Test Report: Docker_Linux_crio 21997

                    
ee66eb73e5650a3c34c21fac75605dac5b258565:2025-12-02:42611

Failed tests (48/415)

Order  Failed test  Duration (s)
38 TestAddons/serial/Volcano 0.28
44 TestAddons/parallel/Registry 14.42
45 TestAddons/parallel/RegistryCreds 0.49
46 TestAddons/parallel/Ingress 148.72
47 TestAddons/parallel/InspektorGadget 5.33
48 TestAddons/parallel/MetricsServer 5.34
50 TestAddons/parallel/CSI 53.33
51 TestAddons/parallel/Headlamp 2.68
52 TestAddons/parallel/CloudSpanner 5.27
53 TestAddons/parallel/LocalPath 12.27
54 TestAddons/parallel/NvidiaDevicePlugin 5.27
55 TestAddons/parallel/Yakd 5.27
56 TestAddons/parallel/AmdGpuDevicePlugin 5.28
106 TestFunctional/parallel/ServiceCmdConnect 603.07
132 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.03
133 TestFunctional/parallel/ServiceCmd/DeployApp 600.66
134 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.22
140 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 4.06
141 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.41
143 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.22
144 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.44
161 TestFunctional/parallel/ServiceCmd/HTTPS 0.57
162 TestFunctional/parallel/ServiceCmd/Format 0.57
163 TestFunctional/parallel/ServiceCmd/URL 0.57
201 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect 603.2
226 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon 1.09
230 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp 600.68
231 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon 1.06
234 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon 1.74
238 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile 0.32
240 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile 0.25
241 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon 0.4
256 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS 0.57
257 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format 0.55
258 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL 0.56
294 TestJSONOutput/pause/Command 2.31
300 TestJSONOutput/unpause/Command 1.81
379 TestPause/serial/Pause 5.68
444 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 3.4
456 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 2.42
460 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 2.57
461 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 2.53
471 TestStartStop/group/newest-cni/serial/Pause 6.38
479 TestStartStop/group/old-k8s-version/serial/Pause 7.27
483 TestStartStop/group/no-preload/serial/Pause 6.58
487 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 2.22
490 TestStartStop/group/default-k8s-diff-port/serial/Pause 6.36
496 TestStartStop/group/embed-certs/serial/Pause 6.18
TestAddons/serial/Volcano (0.28s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-893295 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-893295 addons disable volcano --alsologtostderr -v=1: exit status 11 (274.945519ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 19:56:55.582293  420667 out.go:360] Setting OutFile to fd 1 ...
	I1202 19:56:55.582452  420667 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 19:56:55.582465  420667 out.go:374] Setting ErrFile to fd 2...
	I1202 19:56:55.582473  420667 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 19:56:55.582710  420667 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-407427/.minikube/bin
	I1202 19:56:55.583011  420667 mustload.go:66] Loading cluster: addons-893295
	I1202 19:56:55.583393  420667 config.go:182] Loaded profile config "addons-893295": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:56:55.583418  420667 addons.go:622] checking whether the cluster is paused
	I1202 19:56:55.583506  420667 config.go:182] Loaded profile config "addons-893295": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:56:55.583525  420667 host.go:66] Checking if "addons-893295" exists ...
	I1202 19:56:55.583926  420667 cli_runner.go:164] Run: docker container inspect addons-893295 --format={{.State.Status}}
	I1202 19:56:55.605730  420667 ssh_runner.go:195] Run: systemctl --version
	I1202 19:56:55.605807  420667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-893295
	I1202 19:56:55.625132  420667 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/addons-893295/id_rsa Username:docker}
	I1202 19:56:55.726590  420667 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 19:56:55.726682  420667 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 19:56:55.758392  420667 cri.go:89] found id: "72a3a94a8615446f6a8a6edf8cab89a31462a9125890a07caa7b5c08f54ee5d4"
	I1202 19:56:55.758433  420667 cri.go:89] found id: "3fc3b9c2bb5465a31c0448a05bdfa005e3690110089411631dc7f034b6d8ba5f"
	I1202 19:56:55.758440  420667 cri.go:89] found id: "23592b1014e085ea0e5ab3db08387563e82cae3f3801aefb1a36803352f4b32c"
	I1202 19:56:55.758446  420667 cri.go:89] found id: "4873f6a4745b98e6565829135d48f208fc3b8c8fc38349268058cfe66db69ace"
	I1202 19:56:55.758451  420667 cri.go:89] found id: "69202c0144e36fa98f89f3e4dcc0bb6766cd1a5e7765438a217890a210ccc213"
	I1202 19:56:55.758458  420667 cri.go:89] found id: "e272a50ae70cef4e55de4fc5c4b0afb42c240aef2f0e61c0f58d21f32bb4b1b8"
	I1202 19:56:55.758462  420667 cri.go:89] found id: "343bfc0b495bea2a196f645318c6f732f4aac4d10f89f12fe35398625eac34a6"
	I1202 19:56:55.758466  420667 cri.go:89] found id: "c935f2bdad559803c1b224bb424e2d6a8e3f939cc705debca52e51d3b73805cb"
	I1202 19:56:55.758471  420667 cri.go:89] found id: "2021a9af4b97cf9f19cd51daff4057de8ce4a98c1392ab4618729a6e1fdbe890"
	I1202 19:56:55.758480  420667 cri.go:89] found id: "7d3c2329b0b0c2e623e8d3059a441a596800bfcc5ff55d233343c158bb68d997"
	I1202 19:56:55.758489  420667 cri.go:89] found id: "33d9c5ffbca0f707ad94361bf00ebbc97925e1784dd973ef7bd8245741da9b67"
	I1202 19:56:55.758494  420667 cri.go:89] found id: "c59167a3c785bc464e3e63318df704b0084b4a2a24721b883033175b6f4b533f"
	I1202 19:56:55.758502  420667 cri.go:89] found id: "1d0670321bc4abe2d7954d0d6f908cf4e3863170f2e522b0100392c768577198"
	I1202 19:56:55.758508  420667 cri.go:89] found id: "9012f9d6215d108610b3c6096d8b9fd68c47c3b0a9ba15cab4f13cc9e385d4b9"
	I1202 19:56:55.758515  420667 cri.go:89] found id: "457ec4512e89c116a7c5ba880e93b4b91cf5fc694ff53ccf03533d6e1e36de9b"
	I1202 19:56:55.758523  420667 cri.go:89] found id: "91253d86ed19be0b0e1a31e49336ee85f71ca41d7f491fcc1fd6cd2978993ba0"
	I1202 19:56:55.758528  420667 cri.go:89] found id: "1a4586dbac8e8d1828435d72cdf3947bd1869e463e0102cc7b6664ebbeddeacf"
	I1202 19:56:55.758535  420667 cri.go:89] found id: "548b1d008679ffab8ee06c2f360e832860d3a7904cf593b25d31675b7bb892f9"
	I1202 19:56:55.758539  420667 cri.go:89] found id: "92d33e649bb3a4d1fbfddebd2c29786df9d0f9bbe8c4df37c931d3fb4cae82a7"
	I1202 19:56:55.758542  420667 cri.go:89] found id: "36e4834af56306fb62768ad3dbe8f24dbfa293561c0c41bee1c6d418ce06f454"
	I1202 19:56:55.758550  420667 cri.go:89] found id: "1053c12fee90a817e22976e0dc30541fb27c049e02c7c5af353833a13b30e982"
	I1202 19:56:55.758554  420667 cri.go:89] found id: "87e76e15e8595d052d66a4d86ee8b1416a8f60a669646ecbdd55cf8343b8db42"
	I1202 19:56:55.758558  420667 cri.go:89] found id: "54de7a8ca3420358423254cbf3d9a5a5e7140b7f46e22139375e40856000099c"
	I1202 19:56:55.758563  420667 cri.go:89] found id: "64bbafcaa8986f6e93390db3e1aa160fe3cecdd54cdd91e940adc5db87fefb45"
	I1202 19:56:55.758572  420667 cri.go:89] found id: ""
	I1202 19:56:55.758644  420667 ssh_runner.go:195] Run: sudo runc list -f json
	I1202 19:56:55.774495  420667 out.go:203] 
	W1202 19:56:55.775747  420667 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T19:56:55Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T19:56:55Z" level=error msg="open /run/runc: no such file or directory"
	
	W1202 19:56:55.775781  420667 out.go:285] * 
	* 
	W1202 19:56:55.779840  420667 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 19:56:55.781299  420667 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable volcano addon: args "out/minikube-linux-amd64 -p addons-893295 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.28s)
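
Note: the failure mode above repeats across most of the addon tests in this run. Before disabling an addon, minikube checks whether the cluster is paused: it lists the kube-system containers with crictl (which succeeds, per the "found id" lines) and then runs "sudo runc list -f json" inside the node, which fails with "open /run/runc: no such file or directory", so the command aborts with MK_ADDON_DISABLE_PAUSED and exit status 11. A minimal manual sketch of that same check, assuming the addons-893295 profile from this run is still up (the final ls is an added sanity check, not part of the test), would be:

    # confirm the node container is running (same inspect call shown in the log)
    docker container inspect addons-893295 --format={{.State.Status}}

    # repeat the paused-state check performed by the addon-disable path
    out/minikube-linux-amd64 -p addons-893295 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
    out/minikube-linux-amd64 -p addons-893295 ssh -- sudo runc list -f json
    out/minikube-linux-amd64 -p addons-893295 ssh -- ls -ld /run/runc

In several of these tests the addon itself deploys and serves correctly; only the paused check in the disable path fails because the runc state directory is missing on this crio node.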

                                                
                                    
TestAddons/parallel/Registry (14.42s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 3.272805ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-86wz6" [8dd65e02-986d-4a9b-9796-d9014d33d6d4] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.002811067s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-stnrw" [e1efa7a9-b967-4abf-8104-14eb332f881f] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004278656s
addons_test.go:392: (dbg) Run:  kubectl --context addons-893295 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-893295 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-893295 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.921968784s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-893295 ip
2025/12/02 19:57:19 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-893295 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-893295 addons disable registry --alsologtostderr -v=1: exit status 11 (256.243704ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 19:57:19.833942  422610 out.go:360] Setting OutFile to fd 1 ...
	I1202 19:57:19.834225  422610 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 19:57:19.834234  422610 out.go:374] Setting ErrFile to fd 2...
	I1202 19:57:19.834239  422610 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 19:57:19.834455  422610 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-407427/.minikube/bin
	I1202 19:57:19.834720  422610 mustload.go:66] Loading cluster: addons-893295
	I1202 19:57:19.835042  422610 config.go:182] Loaded profile config "addons-893295": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:57:19.835063  422610 addons.go:622] checking whether the cluster is paused
	I1202 19:57:19.835180  422610 config.go:182] Loaded profile config "addons-893295": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:57:19.835199  422610 host.go:66] Checking if "addons-893295" exists ...
	I1202 19:57:19.835609  422610 cli_runner.go:164] Run: docker container inspect addons-893295 --format={{.State.Status}}
	I1202 19:57:19.854873  422610 ssh_runner.go:195] Run: systemctl --version
	I1202 19:57:19.854929  422610 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-893295
	I1202 19:57:19.873887  422610 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/addons-893295/id_rsa Username:docker}
	I1202 19:57:19.973163  422610 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 19:57:19.973254  422610 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 19:57:20.003308  422610 cri.go:89] found id: "72a3a94a8615446f6a8a6edf8cab89a31462a9125890a07caa7b5c08f54ee5d4"
	I1202 19:57:20.003332  422610 cri.go:89] found id: "3fc3b9c2bb5465a31c0448a05bdfa005e3690110089411631dc7f034b6d8ba5f"
	I1202 19:57:20.003336  422610 cri.go:89] found id: "23592b1014e085ea0e5ab3db08387563e82cae3f3801aefb1a36803352f4b32c"
	I1202 19:57:20.003339  422610 cri.go:89] found id: "4873f6a4745b98e6565829135d48f208fc3b8c8fc38349268058cfe66db69ace"
	I1202 19:57:20.003342  422610 cri.go:89] found id: "69202c0144e36fa98f89f3e4dcc0bb6766cd1a5e7765438a217890a210ccc213"
	I1202 19:57:20.003346  422610 cri.go:89] found id: "e272a50ae70cef4e55de4fc5c4b0afb42c240aef2f0e61c0f58d21f32bb4b1b8"
	I1202 19:57:20.003349  422610 cri.go:89] found id: "343bfc0b495bea2a196f645318c6f732f4aac4d10f89f12fe35398625eac34a6"
	I1202 19:57:20.003352  422610 cri.go:89] found id: "c935f2bdad559803c1b224bb424e2d6a8e3f939cc705debca52e51d3b73805cb"
	I1202 19:57:20.003355  422610 cri.go:89] found id: "2021a9af4b97cf9f19cd51daff4057de8ce4a98c1392ab4618729a6e1fdbe890"
	I1202 19:57:20.003360  422610 cri.go:89] found id: "7d3c2329b0b0c2e623e8d3059a441a596800bfcc5ff55d233343c158bb68d997"
	I1202 19:57:20.003363  422610 cri.go:89] found id: "33d9c5ffbca0f707ad94361bf00ebbc97925e1784dd973ef7bd8245741da9b67"
	I1202 19:57:20.003366  422610 cri.go:89] found id: "c59167a3c785bc464e3e63318df704b0084b4a2a24721b883033175b6f4b533f"
	I1202 19:57:20.003369  422610 cri.go:89] found id: "1d0670321bc4abe2d7954d0d6f908cf4e3863170f2e522b0100392c768577198"
	I1202 19:57:20.003371  422610 cri.go:89] found id: "9012f9d6215d108610b3c6096d8b9fd68c47c3b0a9ba15cab4f13cc9e385d4b9"
	I1202 19:57:20.003376  422610 cri.go:89] found id: "457ec4512e89c116a7c5ba880e93b4b91cf5fc694ff53ccf03533d6e1e36de9b"
	I1202 19:57:20.003400  422610 cri.go:89] found id: "91253d86ed19be0b0e1a31e49336ee85f71ca41d7f491fcc1fd6cd2978993ba0"
	I1202 19:57:20.003409  422610 cri.go:89] found id: "1a4586dbac8e8d1828435d72cdf3947bd1869e463e0102cc7b6664ebbeddeacf"
	I1202 19:57:20.003412  422610 cri.go:89] found id: "548b1d008679ffab8ee06c2f360e832860d3a7904cf593b25d31675b7bb892f9"
	I1202 19:57:20.003415  422610 cri.go:89] found id: "92d33e649bb3a4d1fbfddebd2c29786df9d0f9bbe8c4df37c931d3fb4cae82a7"
	I1202 19:57:20.003418  422610 cri.go:89] found id: "36e4834af56306fb62768ad3dbe8f24dbfa293561c0c41bee1c6d418ce06f454"
	I1202 19:57:20.003421  422610 cri.go:89] found id: "1053c12fee90a817e22976e0dc30541fb27c049e02c7c5af353833a13b30e982"
	I1202 19:57:20.003423  422610 cri.go:89] found id: "87e76e15e8595d052d66a4d86ee8b1416a8f60a669646ecbdd55cf8343b8db42"
	I1202 19:57:20.003426  422610 cri.go:89] found id: "54de7a8ca3420358423254cbf3d9a5a5e7140b7f46e22139375e40856000099c"
	I1202 19:57:20.003429  422610 cri.go:89] found id: "64bbafcaa8986f6e93390db3e1aa160fe3cecdd54cdd91e940adc5db87fefb45"
	I1202 19:57:20.003432  422610 cri.go:89] found id: ""
	I1202 19:57:20.003470  422610 ssh_runner.go:195] Run: sudo runc list -f json
	I1202 19:57:20.018596  422610 out.go:203] 
	W1202 19:57:20.019860  422610 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T19:57:20Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T19:57:20Z" level=error msg="open /run/runc: no such file or directory"
	
	W1202 19:57:20.019880  422610 out.go:285] * 
	* 
	W1202 19:57:20.024164  422610 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 19:57:20.025603  422610 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable registry addon: args "out/minikube-linux-amd64 -p addons-893295 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (14.42s)

                                                
                                    
TestAddons/parallel/RegistryCreds (0.49s)

                                                
                                                
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 4.090517ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-893295
addons_test.go:332: (dbg) Run:  kubectl --context addons-893295 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-893295 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-893295 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (281.094588ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 19:57:23.429265  423528 out.go:360] Setting OutFile to fd 1 ...
	I1202 19:57:23.429411  423528 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 19:57:23.429423  423528 out.go:374] Setting ErrFile to fd 2...
	I1202 19:57:23.429430  423528 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 19:57:23.429628  423528 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-407427/.minikube/bin
	I1202 19:57:23.429929  423528 mustload.go:66] Loading cluster: addons-893295
	I1202 19:57:23.430306  423528 config.go:182] Loaded profile config "addons-893295": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:57:23.430336  423528 addons.go:622] checking whether the cluster is paused
	I1202 19:57:23.430440  423528 config.go:182] Loaded profile config "addons-893295": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:57:23.430465  423528 host.go:66] Checking if "addons-893295" exists ...
	I1202 19:57:23.430845  423528 cli_runner.go:164] Run: docker container inspect addons-893295 --format={{.State.Status}}
	I1202 19:57:23.452945  423528 ssh_runner.go:195] Run: systemctl --version
	I1202 19:57:23.453019  423528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-893295
	I1202 19:57:23.473853  423528 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/addons-893295/id_rsa Username:docker}
	I1202 19:57:23.579616  423528 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 19:57:23.579706  423528 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 19:57:23.612264  423528 cri.go:89] found id: "72a3a94a8615446f6a8a6edf8cab89a31462a9125890a07caa7b5c08f54ee5d4"
	I1202 19:57:23.612288  423528 cri.go:89] found id: "3fc3b9c2bb5465a31c0448a05bdfa005e3690110089411631dc7f034b6d8ba5f"
	I1202 19:57:23.612292  423528 cri.go:89] found id: "23592b1014e085ea0e5ab3db08387563e82cae3f3801aefb1a36803352f4b32c"
	I1202 19:57:23.612295  423528 cri.go:89] found id: "4873f6a4745b98e6565829135d48f208fc3b8c8fc38349268058cfe66db69ace"
	I1202 19:57:23.612298  423528 cri.go:89] found id: "69202c0144e36fa98f89f3e4dcc0bb6766cd1a5e7765438a217890a210ccc213"
	I1202 19:57:23.612301  423528 cri.go:89] found id: "e272a50ae70cef4e55de4fc5c4b0afb42c240aef2f0e61c0f58d21f32bb4b1b8"
	I1202 19:57:23.612307  423528 cri.go:89] found id: "343bfc0b495bea2a196f645318c6f732f4aac4d10f89f12fe35398625eac34a6"
	I1202 19:57:23.612310  423528 cri.go:89] found id: "c935f2bdad559803c1b224bb424e2d6a8e3f939cc705debca52e51d3b73805cb"
	I1202 19:57:23.612313  423528 cri.go:89] found id: "2021a9af4b97cf9f19cd51daff4057de8ce4a98c1392ab4618729a6e1fdbe890"
	I1202 19:57:23.612319  423528 cri.go:89] found id: "7d3c2329b0b0c2e623e8d3059a441a596800bfcc5ff55d233343c158bb68d997"
	I1202 19:57:23.612331  423528 cri.go:89] found id: "33d9c5ffbca0f707ad94361bf00ebbc97925e1784dd973ef7bd8245741da9b67"
	I1202 19:57:23.612335  423528 cri.go:89] found id: "c59167a3c785bc464e3e63318df704b0084b4a2a24721b883033175b6f4b533f"
	I1202 19:57:23.612338  423528 cri.go:89] found id: "1d0670321bc4abe2d7954d0d6f908cf4e3863170f2e522b0100392c768577198"
	I1202 19:57:23.612341  423528 cri.go:89] found id: "9012f9d6215d108610b3c6096d8b9fd68c47c3b0a9ba15cab4f13cc9e385d4b9"
	I1202 19:57:23.612344  423528 cri.go:89] found id: "457ec4512e89c116a7c5ba880e93b4b91cf5fc694ff53ccf03533d6e1e36de9b"
	I1202 19:57:23.612353  423528 cri.go:89] found id: "91253d86ed19be0b0e1a31e49336ee85f71ca41d7f491fcc1fd6cd2978993ba0"
	I1202 19:57:23.612360  423528 cri.go:89] found id: "1a4586dbac8e8d1828435d72cdf3947bd1869e463e0102cc7b6664ebbeddeacf"
	I1202 19:57:23.612364  423528 cri.go:89] found id: "548b1d008679ffab8ee06c2f360e832860d3a7904cf593b25d31675b7bb892f9"
	I1202 19:57:23.612367  423528 cri.go:89] found id: "92d33e649bb3a4d1fbfddebd2c29786df9d0f9bbe8c4df37c931d3fb4cae82a7"
	I1202 19:57:23.612369  423528 cri.go:89] found id: "36e4834af56306fb62768ad3dbe8f24dbfa293561c0c41bee1c6d418ce06f454"
	I1202 19:57:23.612379  423528 cri.go:89] found id: "1053c12fee90a817e22976e0dc30541fb27c049e02c7c5af353833a13b30e982"
	I1202 19:57:23.612386  423528 cri.go:89] found id: "87e76e15e8595d052d66a4d86ee8b1416a8f60a669646ecbdd55cf8343b8db42"
	I1202 19:57:23.612389  423528 cri.go:89] found id: "54de7a8ca3420358423254cbf3d9a5a5e7140b7f46e22139375e40856000099c"
	I1202 19:57:23.612391  423528 cri.go:89] found id: "64bbafcaa8986f6e93390db3e1aa160fe3cecdd54cdd91e940adc5db87fefb45"
	I1202 19:57:23.612395  423528 cri.go:89] found id: ""
	I1202 19:57:23.612438  423528 ssh_runner.go:195] Run: sudo runc list -f json
	I1202 19:57:23.628642  423528 out.go:203] 
	W1202 19:57:23.630147  423528 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T19:57:23Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T19:57:23Z" level=error msg="open /run/runc: no such file or directory"
	
	W1202 19:57:23.630174  423528 out.go:285] * 
	* 
	W1202 19:57:23.634259  423528 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 19:57:23.635530  423528 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable registry-creds addon: args "out/minikube-linux-amd64 -p addons-893295 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.49s)

                                                
                                    
TestAddons/parallel/Ingress (148.72s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-893295 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-893295 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-893295 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [f1d2e10c-06e6-4c04-a51c-529d33d0feed] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [f1d2e10c-06e6-4c04-a51c-529d33d0feed] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.004430484s
I1202 19:57:29.490236  411032 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-893295 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-893295 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m15.905741249s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 28

                                                
                                                
** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
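
For context on the failure above: "ssh: Process exited with status 28" passes through the remote command's exit code, and 28 is curl's operation-timeout code, so the request to 127.0.0.1:80 inside the node hung instead of being refused. A sketch for re-running the same probe by hand, assuming the addons-893295 profile is still running (the -v and --max-time 10 flags are added here for diagnosis and are not part of the test), would be:

    out/minikube-linux-amd64 -p addons-893295 ssh -- curl -v --max-time 10 -H 'Host: nginx.example.com' http://127.0.0.1/

    # check the ingress controller and the nginx backend created by the test
    kubectl --context addons-893295 -n ingress-nginx get pods -o wide
    kubectl --context addons-893295 get ingress,svc,pods
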
addons_test.go:288: (dbg) Run:  kubectl --context addons-893295 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-893295 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-893295
helpers_test.go:243: (dbg) docker inspect addons-893295:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "fb1b06b464b8e1cc0be3d869922ad319eca24c0f73d9dd3623150e70a87dad64",
	        "Created": "2025-12-02T19:55:02.532086274Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 413487,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-02T19:55:02.571981154Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/fb1b06b464b8e1cc0be3d869922ad319eca24c0f73d9dd3623150e70a87dad64/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/fb1b06b464b8e1cc0be3d869922ad319eca24c0f73d9dd3623150e70a87dad64/hostname",
	        "HostsPath": "/var/lib/docker/containers/fb1b06b464b8e1cc0be3d869922ad319eca24c0f73d9dd3623150e70a87dad64/hosts",
	        "LogPath": "/var/lib/docker/containers/fb1b06b464b8e1cc0be3d869922ad319eca24c0f73d9dd3623150e70a87dad64/fb1b06b464b8e1cc0be3d869922ad319eca24c0f73d9dd3623150e70a87dad64-json.log",
	        "Name": "/addons-893295",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-893295:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-893295",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "fb1b06b464b8e1cc0be3d869922ad319eca24c0f73d9dd3623150e70a87dad64",
	                "LowerDir": "/var/lib/docker/overlay2/51fe9b3afe0210445cec2e2cd1c061e3ff5977b7927ed6e339e2f8b682072296-init/diff:/var/lib/docker/overlay2/49bb44e987885c48600d5ae7ad3c81cad82a00f38070c0460882c5746d9fae59/diff",
	                "MergedDir": "/var/lib/docker/overlay2/51fe9b3afe0210445cec2e2cd1c061e3ff5977b7927ed6e339e2f8b682072296/merged",
	                "UpperDir": "/var/lib/docker/overlay2/51fe9b3afe0210445cec2e2cd1c061e3ff5977b7927ed6e339e2f8b682072296/diff",
	                "WorkDir": "/var/lib/docker/overlay2/51fe9b3afe0210445cec2e2cd1c061e3ff5977b7927ed6e339e2f8b682072296/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-893295",
	                "Source": "/var/lib/docker/volumes/addons-893295/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-893295",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-893295",
	                "name.minikube.sigs.k8s.io": "addons-893295",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "4ff1f263f2ab23bcf25eef62bdfec9099c29759ef04c86831ba29bad921bbe62",
	            "SandboxKey": "/var/run/docker/netns/4ff1f263f2ab",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33148"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33149"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33152"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33150"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33151"
	                    }
	                ]
	            },
	            "Networks": {
	                "addons-893295": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "15be3e52e05572810bbe7de119c632146eb5eadd30ee490522b569aa947428b3",
	                    "EndpointID": "a16df17d6d0f82d89ddd5d38762572af57ae1a1fc7c6e0cce2a6ec038b7dfc3a",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "da:b8:cf:f3:41:f2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-893295",
	                        "fb1b06b464b8"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-893295 -n addons-893295
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-893295 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-893295 logs -n 25: (1.264158935s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ --download-only -p binary-mirror-599276 --alsologtostderr --binary-mirror http://127.0.0.1:35789 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-599276 │ jenkins │ v1.37.0 │ 02 Dec 25 19:54 UTC │                     │
	│ delete  │ -p binary-mirror-599276                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-599276 │ jenkins │ v1.37.0 │ 02 Dec 25 19:54 UTC │ 02 Dec 25 19:54 UTC │
	│ addons  │ disable dashboard -p addons-893295                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-893295        │ jenkins │ v1.37.0 │ 02 Dec 25 19:54 UTC │                     │
	│ addons  │ enable dashboard -p addons-893295                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-893295        │ jenkins │ v1.37.0 │ 02 Dec 25 19:54 UTC │                     │
	│ start   │ -p addons-893295 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-893295        │ jenkins │ v1.37.0 │ 02 Dec 25 19:54 UTC │ 02 Dec 25 19:56 UTC │
	│ addons  │ addons-893295 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-893295        │ jenkins │ v1.37.0 │ 02 Dec 25 19:56 UTC │                     │
	│ addons  │ addons-893295 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-893295        │ jenkins │ v1.37.0 │ 02 Dec 25 19:57 UTC │                     │
	│ addons  │ enable headlamp -p addons-893295 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-893295        │ jenkins │ v1.37.0 │ 02 Dec 25 19:57 UTC │                     │
	│ addons  │ addons-893295 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-893295        │ jenkins │ v1.37.0 │ 02 Dec 25 19:57 UTC │                     │
	│ addons  │ addons-893295 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-893295        │ jenkins │ v1.37.0 │ 02 Dec 25 19:57 UTC │                     │
	│ addons  │ addons-893295 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-893295        │ jenkins │ v1.37.0 │ 02 Dec 25 19:57 UTC │                     │
	│ addons  │ addons-893295 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-893295        │ jenkins │ v1.37.0 │ 02 Dec 25 19:57 UTC │                     │
	│ ip      │ addons-893295 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-893295        │ jenkins │ v1.37.0 │ 02 Dec 25 19:57 UTC │ 02 Dec 25 19:57 UTC │
	│ addons  │ addons-893295 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-893295        │ jenkins │ v1.37.0 │ 02 Dec 25 19:57 UTC │                     │
	│ addons  │ addons-893295 addons disable amd-gpu-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-893295        │ jenkins │ v1.37.0 │ 02 Dec 25 19:57 UTC │                     │
	│ ssh     │ addons-893295 ssh cat /opt/local-path-provisioner/pvc-686522b0-186d-48bc-b51e-e42cc4a9a58b_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-893295        │ jenkins │ v1.37.0 │ 02 Dec 25 19:57 UTC │ 02 Dec 25 19:57 UTC │
	│ addons  │ addons-893295 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-893295        │ jenkins │ v1.37.0 │ 02 Dec 25 19:57 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-893295                                                                                                                                                                                                                                                                                                                                                                                           │ addons-893295        │ jenkins │ v1.37.0 │ 02 Dec 25 19:57 UTC │ 02 Dec 25 19:57 UTC │
	│ addons  │ addons-893295 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-893295        │ jenkins │ v1.37.0 │ 02 Dec 25 19:57 UTC │                     │
	│ addons  │ addons-893295 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-893295        │ jenkins │ v1.37.0 │ 02 Dec 25 19:57 UTC │                     │
	│ addons  │ addons-893295 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-893295        │ jenkins │ v1.37.0 │ 02 Dec 25 19:57 UTC │                     │
	│ ssh     │ addons-893295 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-893295        │ jenkins │ v1.37.0 │ 02 Dec 25 19:57 UTC │                     │
	│ addons  │ addons-893295 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-893295        │ jenkins │ v1.37.0 │ 02 Dec 25 19:58 UTC │                     │
	│ addons  │ addons-893295 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-893295        │ jenkins │ v1.37.0 │ 02 Dec 25 19:58 UTC │                     │
	│ ip      │ addons-893295 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-893295        │ jenkins │ v1.37.0 │ 02 Dec 25 19:59 UTC │ 02 Dec 25 19:59 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 19:54:41
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 19:54:41.754876  412831 out.go:360] Setting OutFile to fd 1 ...
	I1202 19:54:41.755164  412831 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 19:54:41.755175  412831 out.go:374] Setting ErrFile to fd 2...
	I1202 19:54:41.755180  412831 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 19:54:41.755413  412831 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-407427/.minikube/bin
	I1202 19:54:41.756120  412831 out.go:368] Setting JSON to false
	I1202 19:54:41.757095  412831 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":5826,"bootTime":1764699456,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 19:54:41.757156  412831 start.go:143] virtualization: kvm guest
	I1202 19:54:41.759099  412831 out.go:179] * [addons-893295] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1202 19:54:41.760375  412831 notify.go:221] Checking for updates...
	I1202 19:54:41.760393  412831 out.go:179]   - MINIKUBE_LOCATION=21997
	I1202 19:54:41.761678  412831 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 19:54:41.763325  412831 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-407427/kubeconfig
	I1202 19:54:41.764668  412831 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-407427/.minikube
	I1202 19:54:41.765832  412831 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1202 19:54:41.766922  412831 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 19:54:41.768321  412831 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 19:54:41.792860  412831 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1202 19:54:41.793026  412831 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 19:54:41.854417  412831 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:false NGoroutines:45 SystemTime:2025-12-02 19:54:41.843751139 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 19:54:41.854523  412831 docker.go:319] overlay module found
	I1202 19:54:41.856369  412831 out.go:179] * Using the docker driver based on user configuration
	I1202 19:54:41.857423  412831 start.go:309] selected driver: docker
	I1202 19:54:41.857444  412831 start.go:927] validating driver "docker" against <nil>
	I1202 19:54:41.857459  412831 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 19:54:41.858082  412831 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 19:54:41.917656  412831 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:false NGoroutines:45 SystemTime:2025-12-02 19:54:41.907921772 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 19:54:41.917867  412831 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1202 19:54:41.918131  412831 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 19:54:41.919739  412831 out.go:179] * Using Docker driver with root privileges
	I1202 19:54:41.920795  412831 cni.go:84] Creating CNI manager for ""
	I1202 19:54:41.920867  412831 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 19:54:41.920879  412831 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1202 19:54:41.920990  412831 start.go:353] cluster config:
	{Name:addons-893295 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-893295 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 19:54:41.922295  412831 out.go:179] * Starting "addons-893295" primary control-plane node in "addons-893295" cluster
	I1202 19:54:41.923633  412831 cache.go:134] Beginning downloading kic base image for docker with crio
	I1202 19:54:41.924802  412831 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1202 19:54:41.925799  412831 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 19:54:41.925843  412831 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21997-407427/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1202 19:54:41.925854  412831 cache.go:65] Caching tarball of preloaded images
	I1202 19:54:41.925904  412831 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 19:54:41.925966  412831 preload.go:238] Found /home/jenkins/minikube-integration/21997-407427/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1202 19:54:41.925980  412831 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1202 19:54:41.926350  412831 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/addons-893295/config.json ...
	I1202 19:54:41.926386  412831 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/addons-893295/config.json: {Name:mk60be7980c08c9778afd7456fa6ca920b75e519 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:54:41.943426  412831 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b to local cache
	I1202 19:54:41.943564  412831 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local cache directory
	I1202 19:54:41.943582  412831 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local cache directory, skipping pull
	I1202 19:54:41.943587  412831 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in cache, skipping pull
	I1202 19:54:41.943594  412831 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b as a tarball
	I1202 19:54:41.943598  412831 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b from local cache
	I1202 19:54:54.733307  412831 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b from cached tarball
	I1202 19:54:54.733352  412831 cache.go:243] Successfully downloaded all kic artifacts
	I1202 19:54:54.733415  412831 start.go:360] acquireMachinesLock for addons-893295: {Name:mk42cd6f39fb536484d21dc2475baeee68e879a3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 19:54:54.733551  412831 start.go:364] duration metric: took 108.678µs to acquireMachinesLock for "addons-893295"
	I1202 19:54:54.733590  412831 start.go:93] Provisioning new machine with config: &{Name:addons-893295 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-893295 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 19:54:54.733666  412831 start.go:125] createHost starting for "" (driver="docker")
	I1202 19:54:54.735565  412831 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1202 19:54:54.735813  412831 start.go:159] libmachine.API.Create for "addons-893295" (driver="docker")
	I1202 19:54:54.735854  412831 client.go:173] LocalClient.Create starting
	I1202 19:54:54.736000  412831 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca.pem
	I1202 19:54:54.847027  412831 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/cert.pem
	I1202 19:54:54.932568  412831 cli_runner.go:164] Run: docker network inspect addons-893295 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1202 19:54:54.949980  412831 cli_runner.go:211] docker network inspect addons-893295 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1202 19:54:54.950097  412831 network_create.go:284] running [docker network inspect addons-893295] to gather additional debugging logs...
	I1202 19:54:54.950123  412831 cli_runner.go:164] Run: docker network inspect addons-893295
	W1202 19:54:54.967909  412831 cli_runner.go:211] docker network inspect addons-893295 returned with exit code 1
	I1202 19:54:54.967947  412831 network_create.go:287] error running [docker network inspect addons-893295]: docker network inspect addons-893295: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-893295 not found
	I1202 19:54:54.967967  412831 network_create.go:289] output of [docker network inspect addons-893295]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-893295 not found
	
	** /stderr **
	I1202 19:54:54.968130  412831 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 19:54:54.986008  412831 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f4e9c0}
	I1202 19:54:54.986055  412831 network_create.go:124] attempt to create docker network addons-893295 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1202 19:54:54.986125  412831 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-893295 addons-893295
	I1202 19:54:55.034573  412831 network_create.go:108] docker network addons-893295 192.168.49.0/24 created
	I1202 19:54:55.034613  412831 kic.go:121] calculated static IP "192.168.49.2" for the "addons-893295" container
	I1202 19:54:55.034677  412831 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1202 19:54:55.052174  412831 cli_runner.go:164] Run: docker volume create addons-893295 --label name.minikube.sigs.k8s.io=addons-893295 --label created_by.minikube.sigs.k8s.io=true
	I1202 19:54:55.071162  412831 oci.go:103] Successfully created a docker volume addons-893295
	I1202 19:54:55.071268  412831 cli_runner.go:164] Run: docker run --rm --name addons-893295-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-893295 --entrypoint /usr/bin/test -v addons-893295:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib
	I1202 19:54:58.586037  412831 cli_runner.go:217] Completed: docker run --rm --name addons-893295-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-893295 --entrypoint /usr/bin/test -v addons-893295:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib: (3.514704843s)
	I1202 19:54:58.586091  412831 oci.go:107] Successfully prepared a docker volume addons-893295
	I1202 19:54:58.586187  412831 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 19:54:58.586205  412831 kic.go:194] Starting extracting preloaded images to volume ...
	I1202 19:54:58.586275  412831 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21997-407427/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-893295:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -I lz4 -xf /preloaded.tar -C /extractDir
	I1202 19:55:02.456234  412831 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21997-407427/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-893295:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -I lz4 -xf /preloaded.tar -C /extractDir: (3.869896147s)
	I1202 19:55:02.456275  412831 kic.go:203] duration metric: took 3.870066852s to extract preloaded images to volume ...
	W1202 19:55:02.456383  412831 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1202 19:55:02.456421  412831 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1202 19:55:02.456466  412831 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1202 19:55:02.515529  412831 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-893295 --name addons-893295 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-893295 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-893295 --network addons-893295 --ip 192.168.49.2 --volume addons-893295:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b
	I1202 19:55:02.791780  412831 cli_runner.go:164] Run: docker container inspect addons-893295 --format={{.State.Running}}
	I1202 19:55:02.811308  412831 cli_runner.go:164] Run: docker container inspect addons-893295 --format={{.State.Status}}
	I1202 19:55:02.830838  412831 cli_runner.go:164] Run: docker exec addons-893295 stat /var/lib/dpkg/alternatives/iptables
	I1202 19:55:02.879029  412831 oci.go:144] the created container "addons-893295" has a running status.
	I1202 19:55:02.879062  412831 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21997-407427/.minikube/machines/addons-893295/id_rsa...
	I1202 19:55:02.998487  412831 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21997-407427/.minikube/machines/addons-893295/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1202 19:55:03.026337  412831 cli_runner.go:164] Run: docker container inspect addons-893295 --format={{.State.Status}}
	I1202 19:55:03.050478  412831 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1202 19:55:03.050512  412831 kic_runner.go:114] Args: [docker exec --privileged addons-893295 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1202 19:55:03.092774  412831 cli_runner.go:164] Run: docker container inspect addons-893295 --format={{.State.Status}}
	I1202 19:55:03.116093  412831 machine.go:94] provisionDockerMachine start ...
	I1202 19:55:03.117453  412831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-893295
	I1202 19:55:03.143504  412831 main.go:143] libmachine: Using SSH client type: native
	I1202 19:55:03.143861  412831 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1202 19:55:03.143879  412831 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 19:55:03.144561  412831 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45870->127.0.0.1:33148: read: connection reset by peer
	I1202 19:55:06.288387  412831 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-893295
	
	I1202 19:55:06.288423  412831 ubuntu.go:182] provisioning hostname "addons-893295"
	I1202 19:55:06.288493  412831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-893295
	I1202 19:55:06.307948  412831 main.go:143] libmachine: Using SSH client type: native
	I1202 19:55:06.308238  412831 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1202 19:55:06.308254  412831 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-893295 && echo "addons-893295" | sudo tee /etc/hostname
	I1202 19:55:06.459279  412831 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-893295
	
	I1202 19:55:06.459398  412831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-893295
	I1202 19:55:06.479773  412831 main.go:143] libmachine: Using SSH client type: native
	I1202 19:55:06.480016  412831 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1202 19:55:06.480036  412831 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-893295' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-893295/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-893295' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 19:55:06.621432  412831 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1202 19:55:06.621470  412831 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-407427/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-407427/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-407427/.minikube}
	I1202 19:55:06.621539  412831 ubuntu.go:190] setting up certificates
	I1202 19:55:06.621560  412831 provision.go:84] configureAuth start
	I1202 19:55:06.621638  412831 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-893295
	I1202 19:55:06.640269  412831 provision.go:143] copyHostCerts
	I1202 19:55:06.640365  412831 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-407427/.minikube/ca.pem (1082 bytes)
	I1202 19:55:06.640500  412831 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-407427/.minikube/cert.pem (1123 bytes)
	I1202 19:55:06.640579  412831 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-407427/.minikube/key.pem (1675 bytes)
	I1202 19:55:06.640645  412831 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-407427/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca-key.pem org=jenkins.addons-893295 san=[127.0.0.1 192.168.49.2 addons-893295 localhost minikube]
	I1202 19:55:06.772196  412831 provision.go:177] copyRemoteCerts
	I1202 19:55:06.772260  412831 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 19:55:06.772296  412831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-893295
	I1202 19:55:06.792279  412831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/addons-893295/id_rsa Username:docker}
	I1202 19:55:06.893428  412831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1202 19:55:06.913887  412831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1202 19:55:06.932439  412831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 19:55:06.951739  412831 provision.go:87] duration metric: took 330.156948ms to configureAuth
	I1202 19:55:06.951774  412831 ubuntu.go:206] setting minikube options for container-runtime
	I1202 19:55:06.952010  412831 config.go:182] Loaded profile config "addons-893295": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:55:06.952149  412831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-893295
	I1202 19:55:06.971190  412831 main.go:143] libmachine: Using SSH client type: native
	I1202 19:55:06.971456  412831 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1202 19:55:06.971474  412831 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 19:55:07.253416  412831 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 19:55:07.253445  412831 machine.go:97] duration metric: took 4.137328753s to provisionDockerMachine
	I1202 19:55:07.253457  412831 client.go:176] duration metric: took 12.517596549s to LocalClient.Create
	I1202 19:55:07.253473  412831 start.go:167] duration metric: took 12.517661857s to libmachine.API.Create "addons-893295"
	I1202 19:55:07.253481  412831 start.go:293] postStartSetup for "addons-893295" (driver="docker")
	I1202 19:55:07.253490  412831 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 19:55:07.253542  412831 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 19:55:07.253580  412831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-893295
	I1202 19:55:07.272132  412831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/addons-893295/id_rsa Username:docker}
	I1202 19:55:07.374390  412831 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 19:55:07.378013  412831 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 19:55:07.378058  412831 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 19:55:07.378087  412831 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-407427/.minikube/addons for local assets ...
	I1202 19:55:07.378162  412831 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-407427/.minikube/files for local assets ...
	I1202 19:55:07.378200  412831 start.go:296] duration metric: took 124.711475ms for postStartSetup
	I1202 19:55:07.378551  412831 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-893295
	I1202 19:55:07.396607  412831 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/addons-893295/config.json ...
	I1202 19:55:07.396907  412831 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 19:55:07.396956  412831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-893295
	I1202 19:55:07.414819  412831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/addons-893295/id_rsa Username:docker}
	I1202 19:55:07.512647  412831 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 19:55:07.517612  412831 start.go:128] duration metric: took 12.783929128s to createHost
	I1202 19:55:07.517651  412831 start.go:83] releasing machines lock for "addons-893295", held for 12.784076086s
	I1202 19:55:07.517749  412831 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-893295
	I1202 19:55:07.536778  412831 ssh_runner.go:195] Run: cat /version.json
	I1202 19:55:07.536840  412831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-893295
	I1202 19:55:07.536845  412831 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 19:55:07.536937  412831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-893295
	I1202 19:55:07.556216  412831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/addons-893295/id_rsa Username:docker}
	I1202 19:55:07.556702  412831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/addons-893295/id_rsa Username:docker}
	I1202 19:55:07.708228  412831 ssh_runner.go:195] Run: systemctl --version
	I1202 19:55:07.715149  412831 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 19:55:07.750385  412831 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 19:55:07.755412  412831 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 19:55:07.755484  412831 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 19:55:07.783980  412831 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1202 19:55:07.784010  412831 start.go:496] detecting cgroup driver to use...
	I1202 19:55:07.784052  412831 detect.go:190] detected "systemd" cgroup driver on host os
	I1202 19:55:07.784133  412831 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 19:55:07.801289  412831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 19:55:07.814283  412831 docker.go:218] disabling cri-docker service (if available) ...
	I1202 19:55:07.814348  412831 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 19:55:07.831847  412831 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 19:55:07.850341  412831 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 19:55:07.933038  412831 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 19:55:08.021963  412831 docker.go:234] disabling docker service ...
	I1202 19:55:08.022044  412831 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 19:55:08.041032  412831 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 19:55:08.054642  412831 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 19:55:08.137632  412831 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 19:55:08.220632  412831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 19:55:08.233740  412831 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 19:55:08.248856  412831 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1202 19:55:08.248925  412831 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:55:08.260653  412831 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1202 19:55:08.260720  412831 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:55:08.270455  412831 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:55:08.280043  412831 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:55:08.289807  412831 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 19:55:08.298915  412831 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:55:08.308358  412831 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:55:08.322943  412831 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:55:08.332807  412831 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 19:55:08.340604  412831 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 19:55:08.348826  412831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 19:55:08.428301  412831 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1202 19:55:08.561309  412831 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 19:55:08.561395  412831 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 19:55:08.565598  412831 start.go:564] Will wait 60s for crictl version
	I1202 19:55:08.565656  412831 ssh_runner.go:195] Run: which crictl
	I1202 19:55:08.569697  412831 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 19:55:08.597089  412831 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1202 19:55:08.597176  412831 ssh_runner.go:195] Run: crio --version
	I1202 19:55:08.626472  412831 ssh_runner.go:195] Run: crio --version
	I1202 19:55:08.658250  412831 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.2 ...
	I1202 19:55:08.659939  412831 cli_runner.go:164] Run: docker network inspect addons-893295 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 19:55:08.677856  412831 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1202 19:55:08.682103  412831 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 19:55:08.692602  412831 kubeadm.go:884] updating cluster {Name:addons-893295 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-893295 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 19:55:08.692734  412831 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 19:55:08.692780  412831 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 19:55:08.724755  412831 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 19:55:08.724779  412831 crio.go:433] Images already preloaded, skipping extraction
	I1202 19:55:08.724834  412831 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 19:55:08.750165  412831 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 19:55:08.750189  412831 cache_images.go:86] Images are preloaded, skipping loading
	I1202 19:55:08.750199  412831 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.2 crio true true} ...
	I1202 19:55:08.750330  412831 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-893295 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:addons-893295 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1202 19:55:08.750418  412831 ssh_runner.go:195] Run: crio config
	I1202 19:55:08.797293  412831 cni.go:84] Creating CNI manager for ""
	I1202 19:55:08.797319  412831 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 19:55:08.797339  412831 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1202 19:55:08.797367  412831 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-893295 NodeName:addons-893295 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 19:55:08.797504  412831 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-893295"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1202 19:55:08.797589  412831 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1202 19:55:08.806141  412831 binaries.go:51] Found k8s binaries, skipping transfer
	I1202 19:55:08.806215  412831 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 19:55:08.814502  412831 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1202 19:55:08.828195  412831 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1202 19:55:08.844151  412831 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1202 19:55:08.857737  412831 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1202 19:55:08.861578  412831 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 19:55:08.872216  412831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 19:55:08.950351  412831 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 19:55:08.974685  412831 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/addons-893295 for IP: 192.168.49.2
	I1202 19:55:08.974711  412831 certs.go:195] generating shared ca certs ...
	I1202 19:55:08.974731  412831 certs.go:227] acquiring lock for ca certs: {Name:mkea11924193dd4dc459e16d70207bde383196df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:55:08.974887  412831 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-407427/.minikube/ca.key
	I1202 19:55:09.190324  412831 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-407427/.minikube/ca.crt ...
	I1202 19:55:09.190366  412831 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-407427/.minikube/ca.crt: {Name:mk3b995f99d1d87432666ba663c87cd170b0d45e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:55:09.190625  412831 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-407427/.minikube/ca.key ...
	I1202 19:55:09.190649  412831 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-407427/.minikube/ca.key: {Name:mkf5b188ab09a4301c9639eae09b9b97499c97f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:55:09.190779  412831 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-407427/.minikube/proxy-client-ca.key
	I1202 19:55:09.297834  412831 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-407427/.minikube/proxy-client-ca.crt ...
	I1202 19:55:09.297877  412831 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-407427/.minikube/proxy-client-ca.crt: {Name:mk00dc72744d82467866a30b889d56ba015b653a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:55:09.298131  412831 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-407427/.minikube/proxy-client-ca.key ...
	I1202 19:55:09.298152  412831 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-407427/.minikube/proxy-client-ca.key: {Name:mk1a2c24e9cddc950384320ea1a06283a2afe5fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:55:09.298273  412831 certs.go:257] generating profile certs ...
	I1202 19:55:09.298386  412831 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/addons-893295/client.key
	I1202 19:55:09.298407  412831 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/addons-893295/client.crt with IP's: []
	I1202 19:55:09.506249  412831 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/addons-893295/client.crt ...
	I1202 19:55:09.506287  412831 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/addons-893295/client.crt: {Name:mkae91d3e3f021742810a61285095b3b97621504 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:55:09.506472  412831 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/addons-893295/client.key ...
	I1202 19:55:09.506487  412831 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/addons-893295/client.key: {Name:mk8c5ce4c85c9f45100bd5dbcecca0cdda41ceea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:55:09.506570  412831 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/addons-893295/apiserver.key.159d5c69
	I1202 19:55:09.506590  412831 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/addons-893295/apiserver.crt.159d5c69 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1202 19:55:09.595757  412831 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/addons-893295/apiserver.crt.159d5c69 ...
	I1202 19:55:09.595801  412831 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/addons-893295/apiserver.crt.159d5c69: {Name:mk95e9c3c58b870a262a683dd3e41ccd67ea9368 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:55:09.595969  412831 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/addons-893295/apiserver.key.159d5c69 ...
	I1202 19:55:09.595982  412831 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/addons-893295/apiserver.key.159d5c69: {Name:mk7e8370a840617572b29fad6cafa3d079b47f6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:55:09.596052  412831 certs.go:382] copying /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/addons-893295/apiserver.crt.159d5c69 -> /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/addons-893295/apiserver.crt
	I1202 19:55:09.596178  412831 certs.go:386] copying /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/addons-893295/apiserver.key.159d5c69 -> /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/addons-893295/apiserver.key
	I1202 19:55:09.596238  412831 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/addons-893295/proxy-client.key
	I1202 19:55:09.596263  412831 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/addons-893295/proxy-client.crt with IP's: []
	I1202 19:55:09.738826  412831 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/addons-893295/proxy-client.crt ...
	I1202 19:55:09.738859  412831 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/addons-893295/proxy-client.crt: {Name:mkce0c088d80376cc5c2a26e657f973c5fcb8f04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:55:09.739036  412831 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/addons-893295/proxy-client.key ...
	I1202 19:55:09.739050  412831 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/addons-893295/proxy-client.key: {Name:mka340db84365af4e52e952419508f47449397f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:55:09.739246  412831 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca-key.pem (1679 bytes)
	I1202 19:55:09.739299  412831 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca.pem (1082 bytes)
	I1202 19:55:09.739326  412831 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/cert.pem (1123 bytes)
	I1202 19:55:09.739350  412831 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/key.pem (1675 bytes)
	I1202 19:55:09.740027  412831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 19:55:09.759656  412831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1202 19:55:09.778604  412831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 19:55:09.797882  412831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1202 19:55:09.817362  412831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/addons-893295/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1202 19:55:09.836277  412831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/addons-893295/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1202 19:55:09.855507  412831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/addons-893295/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 19:55:09.875785  412831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/addons-893295/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1202 19:55:09.896362  412831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 19:55:09.918539  412831 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 19:55:09.932766  412831 ssh_runner.go:195] Run: openssl version
	I1202 19:55:09.939387  412831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 19:55:09.951729  412831 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:55:09.955840  412831 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 19:55 /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:55:09.955911  412831 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:55:09.991210  412831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 19:55:10.001439  412831 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 19:55:10.005695  412831 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1202 19:55:10.005754  412831 kubeadm.go:401] StartCluster: {Name:addons-893295 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-893295 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 19:55:10.005838  412831 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 19:55:10.005898  412831 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 19:55:10.036256  412831 cri.go:89] found id: ""
	I1202 19:55:10.036323  412831 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 19:55:10.044848  412831 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1202 19:55:10.053098  412831 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1202 19:55:10.053159  412831 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 19:55:10.061270  412831 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1202 19:55:10.061290  412831 kubeadm.go:158] found existing configuration files:
	
	I1202 19:55:10.061332  412831 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1202 19:55:10.069972  412831 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1202 19:55:10.070031  412831 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1202 19:55:10.077899  412831 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1202 19:55:10.086424  412831 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1202 19:55:10.086493  412831 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 19:55:10.094133  412831 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1202 19:55:10.102314  412831 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1202 19:55:10.102394  412831 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 19:55:10.110490  412831 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1202 19:55:10.118638  412831 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1202 19:55:10.118714  412831 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1202 19:55:10.126771  412831 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1202 19:55:10.186476  412831 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1202 19:55:10.246526  412831 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1202 19:55:19.778704  412831 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1202 19:55:19.778787  412831 kubeadm.go:319] [preflight] Running pre-flight checks
	I1202 19:55:19.778901  412831 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1202 19:55:19.778985  412831 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1202 19:55:19.779047  412831 kubeadm.go:319] OS: Linux
	I1202 19:55:19.779132  412831 kubeadm.go:319] CGROUPS_CPU: enabled
	I1202 19:55:19.779223  412831 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1202 19:55:19.779329  412831 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1202 19:55:19.779432  412831 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1202 19:55:19.779513  412831 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1202 19:55:19.779591  412831 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1202 19:55:19.779671  412831 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1202 19:55:19.779738  412831 kubeadm.go:319] CGROUPS_IO: enabled
	I1202 19:55:19.779851  412831 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1202 19:55:19.779962  412831 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1202 19:55:19.780123  412831 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1202 19:55:19.780191  412831 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1202 19:55:19.782154  412831 out.go:252]   - Generating certificates and keys ...
	I1202 19:55:19.782262  412831 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1202 19:55:19.782375  412831 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1202 19:55:19.782444  412831 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1202 19:55:19.782515  412831 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1202 19:55:19.782567  412831 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1202 19:55:19.782610  412831 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1202 19:55:19.782666  412831 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1202 19:55:19.782812  412831 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-893295 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1202 19:55:19.782870  412831 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1202 19:55:19.782970  412831 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-893295 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1202 19:55:19.783033  412831 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1202 19:55:19.783104  412831 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1202 19:55:19.783186  412831 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1202 19:55:19.783286  412831 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1202 19:55:19.783356  412831 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1202 19:55:19.783430  412831 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1202 19:55:19.783499  412831 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1202 19:55:19.783593  412831 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1202 19:55:19.783680  412831 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1202 19:55:19.783748  412831 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1202 19:55:19.783810  412831 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1202 19:55:19.785689  412831 out.go:252]   - Booting up control plane ...
	I1202 19:55:19.785837  412831 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1202 19:55:19.785955  412831 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1202 19:55:19.786051  412831 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1202 19:55:19.786186  412831 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1202 19:55:19.786342  412831 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1202 19:55:19.786536  412831 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1202 19:55:19.786645  412831 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1202 19:55:19.786689  412831 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1202 19:55:19.786820  412831 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1202 19:55:19.786926  412831 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1202 19:55:19.786985  412831 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001087089s
	I1202 19:55:19.787086  412831 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1202 19:55:19.787159  412831 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1202 19:55:19.787232  412831 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1202 19:55:19.787300  412831 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1202 19:55:19.787364  412831 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.80207997s
	I1202 19:55:19.787417  412831 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.024506725s
	I1202 19:55:19.787471  412831 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.001062315s
	I1202 19:55:19.787567  412831 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1202 19:55:19.787739  412831 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1202 19:55:19.787821  412831 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1202 19:55:19.788017  412831 kubeadm.go:319] [mark-control-plane] Marking the node addons-893295 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1202 19:55:19.788097  412831 kubeadm.go:319] [bootstrap-token] Using token: tkytdp.l0r3f1mch4ddid0g
	I1202 19:55:19.789689  412831 out.go:252]   - Configuring RBAC rules ...
	I1202 19:55:19.789795  412831 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1202 19:55:19.789880  412831 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1202 19:55:19.790012  412831 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1202 19:55:19.790154  412831 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1202 19:55:19.790299  412831 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1202 19:55:19.790424  412831 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1202 19:55:19.790581  412831 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1202 19:55:19.790644  412831 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1202 19:55:19.790700  412831 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1202 19:55:19.790707  412831 kubeadm.go:319] 
	I1202 19:55:19.790757  412831 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1202 19:55:19.790763  412831 kubeadm.go:319] 
	I1202 19:55:19.790826  412831 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1202 19:55:19.790833  412831 kubeadm.go:319] 
	I1202 19:55:19.790855  412831 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1202 19:55:19.790906  412831 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1202 19:55:19.790951  412831 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1202 19:55:19.790956  412831 kubeadm.go:319] 
	I1202 19:55:19.791009  412831 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1202 19:55:19.791017  412831 kubeadm.go:319] 
	I1202 19:55:19.791059  412831 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1202 19:55:19.791078  412831 kubeadm.go:319] 
	I1202 19:55:19.791125  412831 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1202 19:55:19.791224  412831 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1202 19:55:19.791293  412831 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1202 19:55:19.791311  412831 kubeadm.go:319] 
	I1202 19:55:19.791393  412831 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1202 19:55:19.791476  412831 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1202 19:55:19.791486  412831 kubeadm.go:319] 
	I1202 19:55:19.791622  412831 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token tkytdp.l0r3f1mch4ddid0g \
	I1202 19:55:19.791779  412831 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:48349ffa2369b87cd15480ece56edb1c5c5d97a828930f9dfbf1d1ceccf80ff4 \
	I1202 19:55:19.791809  412831 kubeadm.go:319] 	--control-plane 
	I1202 19:55:19.791813  412831 kubeadm.go:319] 
	I1202 19:55:19.791883  412831 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1202 19:55:19.791892  412831 kubeadm.go:319] 
	I1202 19:55:19.791964  412831 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token tkytdp.l0r3f1mch4ddid0g \
	I1202 19:55:19.792096  412831 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:48349ffa2369b87cd15480ece56edb1c5c5d97a828930f9dfbf1d1ceccf80ff4 
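	The join commands echoed above are standard kubeadm output and are informational only for this single-node profile; a quick sketch for confirming the freshly bootstrapped control plane is serving (assuming kubectl is pointed at this profile's context):

	    # list the control-plane pods created from the static manifests referenced above
	    kubectl --context addons-893295 -n kube-system get pods -o wide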
	I1202 19:55:19.792113  412831 cni.go:84] Creating CNI manager for ""
	I1202 19:55:19.792123  412831 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 19:55:19.794028  412831 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1202 19:55:19.795583  412831 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1202 19:55:19.800666  412831 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1202 19:55:19.800693  412831 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1202 19:55:19.815448  412831 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1202 19:55:20.039869  412831 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1202 19:55:20.039955  412831 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 19:55:20.039961  412831 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-893295 minikube.k8s.io/updated_at=2025_12_02T19_55_20_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=c0dec3f81fa1d67ed4ee425e0873d0aa009f9b92 minikube.k8s.io/name=addons-893295 minikube.k8s.io/primary=true
	I1202 19:55:20.051840  412831 ops.go:34] apiserver oom_adj: -16
	I1202 19:55:20.118951  412831 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 19:55:20.619723  412831 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 19:55:21.119446  412831 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 19:55:21.619614  412831 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 19:55:22.119291  412831 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 19:55:22.619810  412831 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 19:55:23.119576  412831 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 19:55:23.619291  412831 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 19:55:24.119861  412831 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 19:55:24.619924  412831 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 19:55:24.687062  412831 kubeadm.go:1114] duration metric: took 4.64717155s to wait for elevateKubeSystemPrivileges
	I1202 19:55:24.687121  412831 kubeadm.go:403] duration metric: took 14.681374363s to StartCluster
	I1202 19:55:24.687150  412831 settings.go:142] acquiring lock: {Name:mk79858205e0c110b31d2645b49097e349ff37c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:55:24.687266  412831 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-407427/kubeconfig
	I1202 19:55:24.687672  412831 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-407427/kubeconfig: {Name:mk71769b01652992a8aaa239aae4f5c38b3500e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:55:24.687895  412831 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1202 19:55:24.687891  412831 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 19:55:24.687910  412831 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1202 19:55:24.688125  412831 config.go:182] Loaded profile config "addons-893295": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:55:24.688141  412831 addons.go:70] Setting default-storageclass=true in profile "addons-893295"
	I1202 19:55:24.688171  412831 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-893295"
	I1202 19:55:24.688187  412831 addons.go:70] Setting gcp-auth=true in profile "addons-893295"
	I1202 19:55:24.688194  412831 addons.go:70] Setting cloud-spanner=true in profile "addons-893295"
	I1202 19:55:24.688213  412831 addons.go:70] Setting ingress-dns=true in profile "addons-893295"
	I1202 19:55:24.688220  412831 addons.go:70] Setting registry-creds=true in profile "addons-893295"
	I1202 19:55:24.688229  412831 addons.go:239] Setting addon cloud-spanner=true in "addons-893295"
	I1202 19:55:24.688240  412831 addons.go:70] Setting storage-provisioner=true in profile "addons-893295"
	I1202 19:55:24.688241  412831 addons.go:70] Setting ingress=true in profile "addons-893295"
	I1202 19:55:24.688250  412831 addons.go:239] Setting addon registry-creds=true in "addons-893295"
	I1202 19:55:24.688259  412831 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-893295"
	I1202 19:55:24.688265  412831 addons.go:239] Setting addon ingress=true in "addons-893295"
	I1202 19:55:24.688250  412831 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-893295"
	I1202 19:55:24.688279  412831 host.go:66] Checking if "addons-893295" exists ...
	I1202 19:55:24.688289  412831 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-893295"
	I1202 19:55:24.688291  412831 host.go:66] Checking if "addons-893295" exists ...
	I1202 19:55:24.688354  412831 host.go:66] Checking if "addons-893295" exists ...
	I1202 19:55:24.688368  412831 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-893295"
	I1202 19:55:24.688406  412831 host.go:66] Checking if "addons-893295" exists ...
	I1202 19:55:24.688519  412831 cli_runner.go:164] Run: docker container inspect addons-893295 --format={{.State.Status}}
	I1202 19:55:24.688545  412831 cli_runner.go:164] Run: docker container inspect addons-893295 --format={{.State.Status}}
	I1202 19:55:24.688767  412831 cli_runner.go:164] Run: docker container inspect addons-893295 --format={{.State.Status}}
	I1202 19:55:24.688819  412831 cli_runner.go:164] Run: docker container inspect addons-893295 --format={{.State.Status}}
	I1202 19:55:24.688844  412831 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-893295"
	I1202 19:55:24.688849  412831 addons.go:70] Setting volcano=true in profile "addons-893295"
	I1202 19:55:24.688874  412831 addons.go:239] Setting addon volcano=true in "addons-893295"
	I1202 19:55:24.688877  412831 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-893295"
	I1202 19:55:24.688901  412831 host.go:66] Checking if "addons-893295" exists ...
	I1202 19:55:24.688914  412831 host.go:66] Checking if "addons-893295" exists ...
	I1202 19:55:24.688948  412831 cli_runner.go:164] Run: docker container inspect addons-893295 --format={{.State.Status}}
	I1202 19:55:24.688948  412831 cli_runner.go:164] Run: docker container inspect addons-893295 --format={{.State.Status}}
	I1202 19:55:24.689387  412831 cli_runner.go:164] Run: docker container inspect addons-893295 --format={{.State.Status}}
	I1202 19:55:24.688176  412831 addons.go:70] Setting yakd=true in profile "addons-893295"
	I1202 19:55:24.690604  412831 addons.go:239] Setting addon yakd=true in "addons-893295"
	I1202 19:55:24.690637  412831 host.go:66] Checking if "addons-893295" exists ...
	I1202 19:55:24.691157  412831 cli_runner.go:164] Run: docker container inspect addons-893295 --format={{.State.Status}}
	I1202 19:55:24.688207  412831 mustload.go:66] Loading cluster: addons-893295
	I1202 19:55:24.691560  412831 config.go:182] Loaded profile config "addons-893295": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:55:24.691830  412831 cli_runner.go:164] Run: docker container inspect addons-893295 --format={{.State.Status}}
	I1202 19:55:24.688231  412831 addons.go:239] Setting addon ingress-dns=true in "addons-893295"
	I1202 19:55:24.692028  412831 host.go:66] Checking if "addons-893295" exists ...
	I1202 19:55:24.692563  412831 cli_runner.go:164] Run: docker container inspect addons-893295 --format={{.State.Status}}
	I1202 19:55:24.693149  412831 out.go:179] * Verifying Kubernetes components...
	I1202 19:55:24.689453  412831 cli_runner.go:164] Run: docker container inspect addons-893295 --format={{.State.Status}}
	I1202 19:55:24.689518  412831 addons.go:70] Setting volumesnapshots=true in profile "addons-893295"
	I1202 19:55:24.694596  412831 addons.go:239] Setting addon volumesnapshots=true in "addons-893295"
	I1202 19:55:24.694655  412831 host.go:66] Checking if "addons-893295" exists ...
	I1202 19:55:24.689630  412831 addons.go:70] Setting inspektor-gadget=true in profile "addons-893295"
	I1202 19:55:24.694918  412831 addons.go:239] Setting addon inspektor-gadget=true in "addons-893295"
	I1202 19:55:24.694945  412831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 19:55:24.694966  412831 host.go:66] Checking if "addons-893295" exists ...
	I1202 19:55:24.689621  412831 addons.go:70] Setting registry=true in profile "addons-893295"
	I1202 19:55:24.695114  412831 addons.go:239] Setting addon registry=true in "addons-893295"
	I1202 19:55:24.695145  412831 host.go:66] Checking if "addons-893295" exists ...
	I1202 19:55:24.689650  412831 addons.go:70] Setting metrics-server=true in profile "addons-893295"
	I1202 19:55:24.695329  412831 addons.go:239] Setting addon metrics-server=true in "addons-893295"
	I1202 19:55:24.695372  412831 host.go:66] Checking if "addons-893295" exists ...
	I1202 19:55:24.695944  412831 cli_runner.go:164] Run: docker container inspect addons-893295 --format={{.State.Status}}
	I1202 19:55:24.688251  412831 addons.go:239] Setting addon storage-provisioner=true in "addons-893295"
	I1202 19:55:24.689660  412831 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-893295"
	I1202 19:55:24.696280  412831 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-893295"
	I1202 19:55:24.696284  412831 host.go:66] Checking if "addons-893295" exists ...
	I1202 19:55:24.696310  412831 host.go:66] Checking if "addons-893295" exists ...
	I1202 19:55:24.696419  412831 cli_runner.go:164] Run: docker container inspect addons-893295 --format={{.State.Status}}
	I1202 19:55:24.697661  412831 cli_runner.go:164] Run: docker container inspect addons-893295 --format={{.State.Status}}
	I1202 19:55:24.702499  412831 cli_runner.go:164] Run: docker container inspect addons-893295 --format={{.State.Status}}
	I1202 19:55:24.704613  412831 cli_runner.go:164] Run: docker container inspect addons-893295 --format={{.State.Status}}
	I1202 19:55:24.704757  412831 cli_runner.go:164] Run: docker container inspect addons-893295 --format={{.State.Status}}
	W1202 19:55:24.757879  412831 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1202 19:55:24.759015  412831 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-893295"
	I1202 19:55:24.759087  412831 host.go:66] Checking if "addons-893295" exists ...
	I1202 19:55:24.759584  412831 cli_runner.go:164] Run: docker container inspect addons-893295 --format={{.State.Status}}
	I1202 19:55:24.771330  412831 addons.go:239] Setting addon default-storageclass=true in "addons-893295"
	I1202 19:55:24.771388  412831 host.go:66] Checking if "addons-893295" exists ...
	I1202 19:55:24.771854  412831 cli_runner.go:164] Run: docker container inspect addons-893295 --format={{.State.Status}}
	I1202 19:55:24.771889  412831 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1202 19:55:24.772648  412831 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1202 19:55:24.773121  412831 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1202 19:55:24.773139  412831 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1202 19:55:24.773212  412831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-893295
	I1202 19:55:24.778638  412831 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1202 19:55:24.779809  412831 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1202 19:55:24.783353  412831 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1202 19:55:24.783540  412831 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
	I1202 19:55:24.783643  412831 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1202 19:55:24.784723  412831 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1202 19:55:24.788944  412831 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1202 19:55:24.789028  412831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-893295
	I1202 19:55:24.790849  412831 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1202 19:55:24.790909  412831 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1202 19:55:24.790952  412831 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1202 19:55:24.791497  412831 out.go:179]   - Using image docker.io/registry:3.0.0
	I1202 19:55:24.792241  412831 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1202 19:55:24.792260  412831 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1202 19:55:24.792325  412831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-893295
	I1202 19:55:24.796769  412831 host.go:66] Checking if "addons-893295" exists ...
	I1202 19:55:24.798146  412831 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1202 19:55:24.799188  412831 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1202 19:55:24.799555  412831 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1202 19:55:24.799227  412831 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1202 19:55:24.799918  412831 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1202 19:55:24.800221  412831 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1202 19:55:24.800238  412831 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1202 19:55:24.800345  412831 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1202 19:55:24.800362  412831 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1202 19:55:24.800390  412831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-893295
	I1202 19:55:24.800429  412831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-893295
	I1202 19:55:24.800545  412831 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1202 19:55:24.800572  412831 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1202 19:55:24.800621  412831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-893295
	I1202 19:55:24.801056  412831 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 19:55:24.801346  412831 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1202 19:55:24.801415  412831 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1202 19:55:24.801499  412831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-893295
	I1202 19:55:24.801711  412831 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1202 19:55:24.801871  412831 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1202 19:55:24.802056  412831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-893295
	I1202 19:55:24.802120  412831 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1202 19:55:24.802230  412831 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 19:55:24.802463  412831 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1202 19:55:24.802522  412831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-893295
	I1202 19:55:24.804813  412831 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1202 19:55:24.806541  412831 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1202 19:55:24.808968  412831 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1202 19:55:24.809595  412831 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1202 19:55:24.809609  412831 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1202 19:55:24.809618  412831 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1202 19:55:24.809692  412831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-893295
	I1202 19:55:24.811671  412831 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1202 19:55:24.811695  412831 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1202 19:55:24.811759  412831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-893295
	I1202 19:55:24.814202  412831 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1202 19:55:24.814231  412831 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1202 19:55:24.814303  412831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-893295
	I1202 19:55:24.827402  412831 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1202 19:55:24.827429  412831 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1202 19:55:24.827489  412831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-893295
	I1202 19:55:24.830165  412831 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1202 19:55:24.831474  412831 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1202 19:55:24.831501  412831 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1202 19:55:24.831575  412831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-893295
	I1202 19:55:24.846063  412831 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1202 19:55:24.847959  412831 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1202 19:55:24.849436  412831 out.go:179]   - Using image docker.io/busybox:stable
	I1202 19:55:24.850602  412831 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1202 19:55:24.850672  412831 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1202 19:55:24.850772  412831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-893295
	I1202 19:55:24.852930  412831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/addons-893295/id_rsa Username:docker}
	I1202 19:55:24.854741  412831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/addons-893295/id_rsa Username:docker}
	I1202 19:55:24.872500  412831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/addons-893295/id_rsa Username:docker}
	I1202 19:55:24.890953  412831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/addons-893295/id_rsa Username:docker}
	I1202 19:55:24.891121  412831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/addons-893295/id_rsa Username:docker}
	I1202 19:55:24.891679  412831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/addons-893295/id_rsa Username:docker}
	I1202 19:55:24.892393  412831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/addons-893295/id_rsa Username:docker}
	I1202 19:55:24.893916  412831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/addons-893295/id_rsa Username:docker}
	I1202 19:55:24.895275  412831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/addons-893295/id_rsa Username:docker}
	I1202 19:55:24.894771  412831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/addons-893295/id_rsa Username:docker}
	I1202 19:55:24.901582  412831 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 19:55:24.908003  412831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/addons-893295/id_rsa Username:docker}
	I1202 19:55:24.908427  412831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/addons-893295/id_rsa Username:docker}
	I1202 19:55:24.911951  412831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/addons-893295/id_rsa Username:docker}
	W1202 19:55:24.915727  412831 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1202 19:55:24.915871  412831 retry.go:31] will retry after 340.994629ms: ssh: handshake failed: EOF
	I1202 19:55:24.921948  412831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/addons-893295/id_rsa Username:docker}
	I1202 19:55:24.926260  412831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/addons-893295/id_rsa Username:docker}
	I1202 19:55:25.016289  412831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1202 19:55:25.036438  412831 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1202 19:55:25.036535  412831 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1202 19:55:25.048578  412831 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1202 19:55:25.048609  412831 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1202 19:55:25.060271  412831 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1202 19:55:25.060309  412831 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1202 19:55:25.060644  412831 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1202 19:55:25.060665  412831 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1202 19:55:25.065373  412831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1202 19:55:25.066642  412831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1202 19:55:25.079047  412831 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1202 19:55:25.079089  412831 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1202 19:55:25.082777  412831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1202 19:55:25.088726  412831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1202 19:55:25.099612  412831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1202 19:55:25.101549  412831 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1202 19:55:25.101601  412831 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1202 19:55:25.101865  412831 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1202 19:55:25.101886  412831 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1202 19:55:25.105336  412831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1202 19:55:25.107022  412831 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1202 19:55:25.107049  412831 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1202 19:55:25.108553  412831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1202 19:55:25.114559  412831 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1202 19:55:25.114583  412831 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1202 19:55:25.117681  412831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 19:55:25.136645  412831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1202 19:55:25.146338  412831 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1202 19:55:25.146386  412831 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1202 19:55:25.151639  412831 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1202 19:55:25.151666  412831 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1202 19:55:25.152917  412831 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1202 19:55:25.152999  412831 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1202 19:55:25.167533  412831 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1202 19:55:25.167642  412831 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1202 19:55:25.187828  412831 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1202 19:55:25.187858  412831 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1202 19:55:25.199252  412831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1202 19:55:25.199885  412831 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1202 19:55:25.199948  412831 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1202 19:55:25.227865  412831 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1202 19:55:25.229960  412831 node_ready.go:35] waiting up to 6m0s for node "addons-893295" to be "Ready" ...
	I1202 19:55:25.238768  412831 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1202 19:55:25.238800  412831 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1202 19:55:25.253319  412831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1202 19:55:25.272639  412831 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1202 19:55:25.272739  412831 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1202 19:55:25.307846  412831 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1202 19:55:25.307870  412831 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1202 19:55:25.336100  412831 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1202 19:55:25.336218  412831 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1202 19:55:25.380585  412831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1202 19:55:25.398469  412831 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1202 19:55:25.398498  412831 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1202 19:55:25.480662  412831 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1202 19:55:25.480696  412831 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1202 19:55:25.520218  412831 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1202 19:55:25.520243  412831 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1202 19:55:25.556899  412831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1202 19:55:25.588940  412831 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1202 19:55:25.588977  412831 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1202 19:55:25.627336  412831 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1202 19:55:25.627391  412831 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1202 19:55:25.702577  412831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1202 19:55:25.749908  412831 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-893295" context rescaled to 1 replicas
	I1202 19:55:26.091479  412831 addons.go:495] Verifying addon registry=true in "addons-893295"
	I1202 19:55:26.091806  412831 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.00303983s)
	I1202 19:55:26.093482  412831 out.go:179] * Verifying registry addon...
	I1202 19:55:26.095516  412831 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1202 19:55:26.108287  412831 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1202 19:55:26.108315  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:55:26.141423  412831 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-893295 service yakd-dashboard -n yakd-dashboard
	
	I1202 19:55:26.148403  412831 addons.go:495] Verifying addon metrics-server=true in "addons-893295"
	I1202 19:55:26.599346  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:55:26.829446  412831 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.4488126s)
	W1202 19:55:26.829513  412831 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1202 19:55:26.829544  412831 retry.go:31] will retry after 323.685556ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1202 19:55:26.829599  412831 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.272656515s)
	I1202 19:55:26.829631  412831 addons.go:495] Verifying addon ingress=true in "addons-893295"
	I1202 19:55:26.829868  412831 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.127249428s)
	I1202 19:55:26.829904  412831 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-893295"
	I1202 19:55:26.832320  412831 out.go:179] * Verifying ingress addon...
	I1202 19:55:26.832398  412831 out.go:179] * Verifying csi-hostpath-driver addon...
	I1202 19:55:26.836516  412831 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1202 19:55:26.836520  412831 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1202 19:55:26.844670  412831 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1202 19:55:26.844699  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:26.844894  412831 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1202 19:55:26.844917  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:27.099740  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:55:27.153957  412831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	W1202 19:55:27.233300  412831 node_ready.go:57] node "addons-893295" has "Ready":"False" status (will retry)
	I1202 19:55:27.341296  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:27.341333  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:27.599775  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:55:27.840421  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:27.840540  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:28.098823  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:55:28.341030  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:28.341063  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:28.599780  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:55:28.841120  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:28.841211  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:29.099914  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1202 19:55:29.233365  412831 node_ready.go:57] node "addons-893295" has "Ready":"False" status (will retry)
	I1202 19:55:29.341004  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:29.341026  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:29.599634  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:55:29.663386  412831 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.50937655s)
	I1202 19:55:29.840475  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:29.840488  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:30.099616  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:55:30.341004  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:30.341163  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:30.599376  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:55:30.840276  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:30.840370  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:31.099916  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1202 19:55:31.233455  412831 node_ready.go:57] node "addons-893295" has "Ready":"False" status (will retry)
	I1202 19:55:31.340978  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:31.340997  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:31.598747  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:55:31.840699  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:31.840761  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:32.099800  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:55:32.340182  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:32.340271  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:32.405984  412831 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1202 19:55:32.406059  412831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-893295
	I1202 19:55:32.425316  412831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/addons-893295/id_rsa Username:docker}
	I1202 19:55:32.533639  412831 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1202 19:55:32.547299  412831 addons.go:239] Setting addon gcp-auth=true in "addons-893295"
	I1202 19:55:32.547371  412831 host.go:66] Checking if "addons-893295" exists ...
	I1202 19:55:32.547756  412831 cli_runner.go:164] Run: docker container inspect addons-893295 --format={{.State.Status}}
	I1202 19:55:32.566179  412831 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1202 19:55:32.566233  412831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-893295
	I1202 19:55:32.585787  412831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/addons-893295/id_rsa Username:docker}
	I1202 19:55:32.599653  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:55:32.686920  412831 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1202 19:55:32.688240  412831 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1202 19:55:32.689414  412831 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1202 19:55:32.689442  412831 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1202 19:55:32.704413  412831 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1202 19:55:32.704443  412831 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1202 19:55:32.717871  412831 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1202 19:55:32.717896  412831 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1202 19:55:32.731351  412831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1202 19:55:32.840586  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:32.840636  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:33.057781  412831 addons.go:495] Verifying addon gcp-auth=true in "addons-893295"
	I1202 19:55:33.062217  412831 out.go:179] * Verifying gcp-auth addon...
	I1202 19:55:33.064810  412831 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1202 19:55:33.067625  412831 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1202 19:55:33.067647  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:55:33.099310  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:55:33.340103  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:33.340412  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:33.567677  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:55:33.598802  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1202 19:55:33.733681  412831 node_ready.go:57] node "addons-893295" has "Ready":"False" status (will retry)
	I1202 19:55:33.839767  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:33.839772  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:34.069223  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:55:34.099158  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:55:34.340482  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:34.340669  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:34.568584  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:55:34.599618  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:55:34.839833  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:34.839883  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:35.069008  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:55:35.098759  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:55:35.340094  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:35.340192  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:35.568090  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:55:35.598970  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1202 19:55:35.733920  412831 node_ready.go:57] node "addons-893295" has "Ready":"False" status (will retry)
	I1202 19:55:35.840181  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:35.840201  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:36.068822  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:55:36.098699  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:55:36.339674  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:36.339734  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:36.568603  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:55:36.599153  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:55:36.840759  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:36.840773  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:37.068759  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:55:37.098671  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:55:37.340597  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:37.340728  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:37.568909  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:55:37.598780  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:55:37.839665  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:37.839727  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:38.068981  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:55:38.098912  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1202 19:55:38.233691  412831 node_ready.go:57] node "addons-893295" has "Ready":"False" status (will retry)
	I1202 19:55:38.340039  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:38.340100  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:38.568063  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:55:38.598881  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:55:38.839244  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:38.839297  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:39.068222  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:55:39.098955  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:55:39.340161  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:39.340306  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:39.568546  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:55:39.599535  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:55:39.839744  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:39.839876  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:40.067864  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:55:40.098925  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1202 19:55:40.234051  412831 node_ready.go:57] node "addons-893295" has "Ready":"False" status (will retry)
	I1202 19:55:40.340295  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:40.340352  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:40.568319  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:55:40.599280  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:55:40.840062  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:40.840207  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:41.067801  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:55:41.098792  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:55:41.339870  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:41.339960  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:41.567865  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:55:41.598837  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:55:41.840119  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:41.840143  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:42.068265  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:55:42.169469  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:55:42.340140  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:42.340188  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:42.567941  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:55:42.598664  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1202 19:55:42.733776  412831 node_ready.go:57] node "addons-893295" has "Ready":"False" status (will retry)
	I1202 19:55:42.840157  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:42.840199  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:43.067676  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:55:43.098324  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:55:43.340373  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:43.340430  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:43.568318  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:55:43.599111  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:55:43.839938  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:43.840042  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:44.068386  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:55:44.099490  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:55:44.340432  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:44.340498  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:44.568336  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:55:44.599168  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:55:44.840582  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:44.840622  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:45.068650  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:55:45.098516  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1202 19:55:45.233473  412831 node_ready.go:57] node "addons-893295" has "Ready":"False" status (will retry)
	I1202 19:55:45.340886  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:45.340882  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:45.568137  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:55:45.598962  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:55:45.839754  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:45.839879  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:46.069024  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:55:46.099278  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:55:46.340490  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:46.340561  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:46.568880  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:55:46.598670  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:55:46.840392  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:46.840565  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:47.068737  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:55:47.098492  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1202 19:55:47.233865  412831 node_ready.go:57] node "addons-893295" has "Ready":"False" status (will retry)
	I1202 19:55:47.339905  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:47.339950  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:47.567747  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:55:47.598811  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:55:47.840845  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:47.840869  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:48.068891  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:55:48.098853  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:55:48.340337  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:48.340369  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:48.568229  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:55:48.599308  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:55:48.840253  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:48.840446  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:49.068129  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:55:49.098796  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:55:49.340145  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:49.340242  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:49.568308  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:55:49.599233  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1202 19:55:49.733090  412831 node_ready.go:57] node "addons-893295" has "Ready":"False" status (will retry)
	I1202 19:55:49.840522  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:49.840717  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:50.071239  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:55:50.099169  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:55:50.340137  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:50.340339  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:50.567901  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:55:50.598840  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:55:50.840155  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:50.840179  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:51.068413  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:55:51.099585  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:55:51.340029  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:51.340102  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:51.567985  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:55:51.598736  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1202 19:55:51.733683  412831 node_ready.go:57] node "addons-893295" has "Ready":"False" status (will retry)
	I1202 19:55:51.839982  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:51.840123  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:52.068219  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:55:52.098904  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:55:52.340050  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:52.340100  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:52.567673  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:55:52.598530  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:55:52.840759  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:52.840895  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:53.067984  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:55:53.099039  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:55:53.339807  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:53.339996  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:53.567730  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:55:53.598735  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:55:53.840714  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:53.840723  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:54.068640  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:55:54.098605  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1202 19:55:54.233531  412831 node_ready.go:57] node "addons-893295" has "Ready":"False" status (will retry)
	I1202 19:55:54.340593  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:54.340668  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:54.568616  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:55:54.598700  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:55:54.839990  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:54.840222  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:55.068064  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:55:55.099027  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:55:55.340316  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:55.340527  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:55.568496  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:55:55.599815  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:55:55.839663  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:55.839762  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:56.068035  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:55:56.098846  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1202 19:55:56.233686  412831 node_ready.go:57] node "addons-893295" has "Ready":"False" status (will retry)
	I1202 19:55:56.339861  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:56.339933  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:56.567976  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:55:56.598854  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:55:56.840347  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:56.840545  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:57.068370  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:55:57.099287  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:55:57.340591  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:57.340657  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:57.568563  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:55:57.599634  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:55:57.840658  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:57.840847  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:58.069106  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:55:58.099023  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:55:58.340340  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:58.340377  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:58.568686  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:55:58.598607  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1202 19:55:58.733812  412831 node_ready.go:57] node "addons-893295" has "Ready":"False" status (will retry)
	I1202 19:55:58.839935  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:58.839997  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:59.067792  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:55:59.098963  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:55:59.340356  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:59.340523  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:59.568917  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:55:59.598724  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:55:59.839969  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:59.840113  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:00.068210  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:00.099436  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:00.340813  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:00.340877  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:00.568932  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:00.598791  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:00.839674  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:00.839754  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:01.068793  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:01.098709  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1202 19:56:01.233846  412831 node_ready.go:57] node "addons-893295" has "Ready":"False" status (will retry)
	I1202 19:56:01.340105  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:01.340154  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:01.568435  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:01.599637  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:01.839944  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:01.840061  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:02.068381  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:02.099445  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:02.340607  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:02.340752  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:02.568713  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:02.598747  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:02.840544  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:02.840646  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:03.068858  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:03.098852  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:03.339824  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:03.339927  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:03.567799  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:03.599129  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1202 19:56:03.733059  412831 node_ready.go:57] node "addons-893295" has "Ready":"False" status (will retry)
	I1202 19:56:03.840292  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:03.840377  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:04.068346  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:04.099413  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:04.340480  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:04.340546  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:04.568640  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:04.598401  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:04.839770  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:04.839782  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:05.068782  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:05.098659  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:05.340432  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:05.340505  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:05.568811  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:05.598888  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1202 19:56:05.734101  412831 node_ready.go:57] node "addons-893295" has "Ready":"False" status (will retry)
	I1202 19:56:05.840110  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:05.840351  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:06.068458  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:06.099540  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:06.233255  412831 node_ready.go:49] node "addons-893295" is "Ready"
	I1202 19:56:06.233292  412831 node_ready.go:38] duration metric: took 41.003298051s for node "addons-893295" to be "Ready" ...
	I1202 19:56:06.233313  412831 api_server.go:52] waiting for apiserver process to appear ...
	I1202 19:56:06.233377  412831 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:56:06.250258  412831 api_server.go:72] duration metric: took 41.562259874s to wait for apiserver process to appear ...
	I1202 19:56:06.250290  412831 api_server.go:88] waiting for apiserver healthz status ...
	I1202 19:56:06.250319  412831 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1202 19:56:06.255555  412831 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1202 19:56:06.256602  412831 api_server.go:141] control plane version: v1.34.2
	I1202 19:56:06.256629  412831 api_server.go:131] duration metric: took 6.328947ms to wait for apiserver health ...
	I1202 19:56:06.256639  412831 system_pods.go:43] waiting for kube-system pods to appear ...
	I1202 19:56:06.260222  412831 system_pods.go:59] 20 kube-system pods found
	I1202 19:56:06.260262  412831 system_pods.go:61] "amd-gpu-device-plugin-nklpz" [9d4535df-fe2e-4f5a-8273-23b1b3e6d8b8] Pending
	I1202 19:56:06.260276  412831 system_pods.go:61] "coredns-66bc5c9577-9mvmk" [ca5a6890-e2db-40a3-8302-3fcc4309e66a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 19:56:06.260286  412831 system_pods.go:61] "csi-hostpath-attacher-0" [86ea36d5-0952-4bf9-82dd-fb267c9a17fe] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1202 19:56:06.260297  412831 system_pods.go:61] "csi-hostpath-resizer-0" [45b788c6-fc8c-49b8-883c-93d3160e893b] Pending
	I1202 19:56:06.260306  412831 system_pods.go:61] "csi-hostpathplugin-6h8dt" [782b735d-c731-4592-861f-0572e0581ce1] Pending
	I1202 19:56:06.260311  412831 system_pods.go:61] "etcd-addons-893295" [12b09750-804b-410b-8096-afb7db0b7cff] Running
	I1202 19:56:06.260320  412831 system_pods.go:61] "kindnet-bphsd" [035c64e4-9b5a-4fb5-9129-c78c186861ad] Running
	I1202 19:56:06.260324  412831 system_pods.go:61] "kube-apiserver-addons-893295" [44cedc90-0e81-4707-be65-2031c2da26db] Running
	I1202 19:56:06.260340  412831 system_pods.go:61] "kube-controller-manager-addons-893295" [f2a70c97-80a7-4072-8e06-31fdc7b7e92f] Running
	I1202 19:56:06.260349  412831 system_pods.go:61] "kube-ingress-dns-minikube" [8a7095ba-44a5-4e5c-bec7-847ffd18dc36] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1202 19:56:06.260357  412831 system_pods.go:61] "kube-proxy-2bxgd" [32906a03-9bcd-402b-948d-bcc65caa49fc] Running
	I1202 19:56:06.260363  412831 system_pods.go:61] "kube-scheduler-addons-893295" [85e7b347-9ab5-45c7-a9d3-2f9cdb139280] Running
	I1202 19:56:06.260373  412831 system_pods.go:61] "metrics-server-85b7d694d7-fbhzv" [51840c60-3fa9-4717-85ec-69d3082c6537] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1202 19:56:06.260382  412831 system_pods.go:61] "nvidia-device-plugin-daemonset-bkjsl" [ee51e4e2-139f-407a-a020-b6a91e40e7bf] Pending
	I1202 19:56:06.260388  412831 system_pods.go:61] "registry-6b586f9694-86wz6" [8dd65e02-986d-4a9b-9796-d9014d33d6d4] Pending
	I1202 19:56:06.260398  412831 system_pods.go:61] "registry-creds-764b6fb674-qwrlk" [242299e3-e588-4f0a-890d-da4c53cafcce] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1202 19:56:06.260408  412831 system_pods.go:61] "registry-proxy-stnrw" [e1efa7a9-b967-4abf-8104-14eb332f881f] Pending
	I1202 19:56:06.260414  412831 system_pods.go:61] "snapshot-controller-7d9fbc56b8-57ls2" [808c977f-4e69-4d1b-ba59-e82fe31100c7] Pending
	I1202 19:56:06.260422  412831 system_pods.go:61] "snapshot-controller-7d9fbc56b8-kwz4l" [7b98a4b5-96b5-4d1d-b6c6-983f165030db] Pending
	I1202 19:56:06.260430  412831 system_pods.go:61] "storage-provisioner" [d1b4b030-354a-45e2-aa34-ff9768a43e99] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 19:56:06.260438  412831 system_pods.go:74] duration metric: took 3.792353ms to wait for pod list to return data ...
	I1202 19:56:06.260451  412831 default_sa.go:34] waiting for default service account to be created ...
	I1202 19:56:06.264275  412831 default_sa.go:45] found service account: "default"
	I1202 19:56:06.264306  412831 default_sa.go:55] duration metric: took 3.846399ms for default service account to be created ...
	I1202 19:56:06.264319  412831 system_pods.go:116] waiting for k8s-apps to be running ...
	I1202 19:56:06.267772  412831 system_pods.go:86] 20 kube-system pods found
	I1202 19:56:06.267806  412831 system_pods.go:89] "amd-gpu-device-plugin-nklpz" [9d4535df-fe2e-4f5a-8273-23b1b3e6d8b8] Pending
	I1202 19:56:06.267817  412831 system_pods.go:89] "coredns-66bc5c9577-9mvmk" [ca5a6890-e2db-40a3-8302-3fcc4309e66a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 19:56:06.267826  412831 system_pods.go:89] "csi-hostpath-attacher-0" [86ea36d5-0952-4bf9-82dd-fb267c9a17fe] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1202 19:56:06.267836  412831 system_pods.go:89] "csi-hostpath-resizer-0" [45b788c6-fc8c-49b8-883c-93d3160e893b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1202 19:56:06.267842  412831 system_pods.go:89] "csi-hostpathplugin-6h8dt" [782b735d-c731-4592-861f-0572e0581ce1] Pending
	I1202 19:56:06.267848  412831 system_pods.go:89] "etcd-addons-893295" [12b09750-804b-410b-8096-afb7db0b7cff] Running
	I1202 19:56:06.267854  412831 system_pods.go:89] "kindnet-bphsd" [035c64e4-9b5a-4fb5-9129-c78c186861ad] Running
	I1202 19:56:06.267862  412831 system_pods.go:89] "kube-apiserver-addons-893295" [44cedc90-0e81-4707-be65-2031c2da26db] Running
	I1202 19:56:06.267867  412831 system_pods.go:89] "kube-controller-manager-addons-893295" [f2a70c97-80a7-4072-8e06-31fdc7b7e92f] Running
	I1202 19:56:06.267879  412831 system_pods.go:89] "kube-ingress-dns-minikube" [8a7095ba-44a5-4e5c-bec7-847ffd18dc36] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1202 19:56:06.267884  412831 system_pods.go:89] "kube-proxy-2bxgd" [32906a03-9bcd-402b-948d-bcc65caa49fc] Running
	I1202 19:56:06.267894  412831 system_pods.go:89] "kube-scheduler-addons-893295" [85e7b347-9ab5-45c7-a9d3-2f9cdb139280] Running
	I1202 19:56:06.267901  412831 system_pods.go:89] "metrics-server-85b7d694d7-fbhzv" [51840c60-3fa9-4717-85ec-69d3082c6537] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1202 19:56:06.267910  412831 system_pods.go:89] "nvidia-device-plugin-daemonset-bkjsl" [ee51e4e2-139f-407a-a020-b6a91e40e7bf] Pending
	I1202 19:56:06.267916  412831 system_pods.go:89] "registry-6b586f9694-86wz6" [8dd65e02-986d-4a9b-9796-d9014d33d6d4] Pending
	I1202 19:56:06.267925  412831 system_pods.go:89] "registry-creds-764b6fb674-qwrlk" [242299e3-e588-4f0a-890d-da4c53cafcce] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1202 19:56:06.267932  412831 system_pods.go:89] "registry-proxy-stnrw" [e1efa7a9-b967-4abf-8104-14eb332f881f] Pending
	I1202 19:56:06.267941  412831 system_pods.go:89] "snapshot-controller-7d9fbc56b8-57ls2" [808c977f-4e69-4d1b-ba59-e82fe31100c7] Pending
	I1202 19:56:06.267946  412831 system_pods.go:89] "snapshot-controller-7d9fbc56b8-kwz4l" [7b98a4b5-96b5-4d1d-b6c6-983f165030db] Pending
	I1202 19:56:06.267956  412831 system_pods.go:89] "storage-provisioner" [d1b4b030-354a-45e2-aa34-ff9768a43e99] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 19:56:06.267978  412831 retry.go:31] will retry after 232.050934ms: missing components: kube-dns
	I1202 19:56:06.339633  412831 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1202 19:56:06.339661  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:06.339638  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:06.507281  412831 system_pods.go:86] 20 kube-system pods found
	I1202 19:56:06.507327  412831 system_pods.go:89] "amd-gpu-device-plugin-nklpz" [9d4535df-fe2e-4f5a-8273-23b1b3e6d8b8] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1202 19:56:06.507341  412831 system_pods.go:89] "coredns-66bc5c9577-9mvmk" [ca5a6890-e2db-40a3-8302-3fcc4309e66a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 19:56:06.507352  412831 system_pods.go:89] "csi-hostpath-attacher-0" [86ea36d5-0952-4bf9-82dd-fb267c9a17fe] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1202 19:56:06.507362  412831 system_pods.go:89] "csi-hostpath-resizer-0" [45b788c6-fc8c-49b8-883c-93d3160e893b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1202 19:56:06.507370  412831 system_pods.go:89] "csi-hostpathplugin-6h8dt" [782b735d-c731-4592-861f-0572e0581ce1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1202 19:56:06.507376  412831 system_pods.go:89] "etcd-addons-893295" [12b09750-804b-410b-8096-afb7db0b7cff] Running
	I1202 19:56:06.507383  412831 system_pods.go:89] "kindnet-bphsd" [035c64e4-9b5a-4fb5-9129-c78c186861ad] Running
	I1202 19:56:06.507415  412831 system_pods.go:89] "kube-apiserver-addons-893295" [44cedc90-0e81-4707-be65-2031c2da26db] Running
	I1202 19:56:06.507426  412831 system_pods.go:89] "kube-controller-manager-addons-893295" [f2a70c97-80a7-4072-8e06-31fdc7b7e92f] Running
	I1202 19:56:06.507444  412831 system_pods.go:89] "kube-ingress-dns-minikube" [8a7095ba-44a5-4e5c-bec7-847ffd18dc36] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1202 19:56:06.507453  412831 system_pods.go:89] "kube-proxy-2bxgd" [32906a03-9bcd-402b-948d-bcc65caa49fc] Running
	I1202 19:56:06.507460  412831 system_pods.go:89] "kube-scheduler-addons-893295" [85e7b347-9ab5-45c7-a9d3-2f9cdb139280] Running
	I1202 19:56:06.507468  412831 system_pods.go:89] "metrics-server-85b7d694d7-fbhzv" [51840c60-3fa9-4717-85ec-69d3082c6537] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1202 19:56:06.507478  412831 system_pods.go:89] "nvidia-device-plugin-daemonset-bkjsl" [ee51e4e2-139f-407a-a020-b6a91e40e7bf] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1202 19:56:06.507487  412831 system_pods.go:89] "registry-6b586f9694-86wz6" [8dd65e02-986d-4a9b-9796-d9014d33d6d4] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1202 19:56:06.507497  412831 system_pods.go:89] "registry-creds-764b6fb674-qwrlk" [242299e3-e588-4f0a-890d-da4c53cafcce] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1202 19:56:06.507505  412831 system_pods.go:89] "registry-proxy-stnrw" [e1efa7a9-b967-4abf-8104-14eb332f881f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1202 19:56:06.507513  412831 system_pods.go:89] "snapshot-controller-7d9fbc56b8-57ls2" [808c977f-4e69-4d1b-ba59-e82fe31100c7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1202 19:56:06.507523  412831 system_pods.go:89] "snapshot-controller-7d9fbc56b8-kwz4l" [7b98a4b5-96b5-4d1d-b6c6-983f165030db] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1202 19:56:06.507531  412831 system_pods.go:89] "storage-provisioner" [d1b4b030-354a-45e2-aa34-ff9768a43e99] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 19:56:06.507555  412831 retry.go:31] will retry after 279.7801ms: missing components: kube-dns
	I1202 19:56:06.604867  412831 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1202 19:56:06.604893  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:06.604927  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:06.793536  412831 system_pods.go:86] 20 kube-system pods found
	I1202 19:56:06.793579  412831 system_pods.go:89] "amd-gpu-device-plugin-nklpz" [9d4535df-fe2e-4f5a-8273-23b1b3e6d8b8] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1202 19:56:06.793590  412831 system_pods.go:89] "coredns-66bc5c9577-9mvmk" [ca5a6890-e2db-40a3-8302-3fcc4309e66a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 19:56:06.793601  412831 system_pods.go:89] "csi-hostpath-attacher-0" [86ea36d5-0952-4bf9-82dd-fb267c9a17fe] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1202 19:56:06.793610  412831 system_pods.go:89] "csi-hostpath-resizer-0" [45b788c6-fc8c-49b8-883c-93d3160e893b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1202 19:56:06.793619  412831 system_pods.go:89] "csi-hostpathplugin-6h8dt" [782b735d-c731-4592-861f-0572e0581ce1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1202 19:56:06.793630  412831 system_pods.go:89] "etcd-addons-893295" [12b09750-804b-410b-8096-afb7db0b7cff] Running
	I1202 19:56:06.793642  412831 system_pods.go:89] "kindnet-bphsd" [035c64e4-9b5a-4fb5-9129-c78c186861ad] Running
	I1202 19:56:06.793647  412831 system_pods.go:89] "kube-apiserver-addons-893295" [44cedc90-0e81-4707-be65-2031c2da26db] Running
	I1202 19:56:06.793655  412831 system_pods.go:89] "kube-controller-manager-addons-893295" [f2a70c97-80a7-4072-8e06-31fdc7b7e92f] Running
	I1202 19:56:06.793667  412831 system_pods.go:89] "kube-ingress-dns-minikube" [8a7095ba-44a5-4e5c-bec7-847ffd18dc36] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1202 19:56:06.793674  412831 system_pods.go:89] "kube-proxy-2bxgd" [32906a03-9bcd-402b-948d-bcc65caa49fc] Running
	I1202 19:56:06.793680  412831 system_pods.go:89] "kube-scheduler-addons-893295" [85e7b347-9ab5-45c7-a9d3-2f9cdb139280] Running
	I1202 19:56:06.793688  412831 system_pods.go:89] "metrics-server-85b7d694d7-fbhzv" [51840c60-3fa9-4717-85ec-69d3082c6537] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1202 19:56:06.793696  412831 system_pods.go:89] "nvidia-device-plugin-daemonset-bkjsl" [ee51e4e2-139f-407a-a020-b6a91e40e7bf] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1202 19:56:06.793708  412831 system_pods.go:89] "registry-6b586f9694-86wz6" [8dd65e02-986d-4a9b-9796-d9014d33d6d4] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1202 19:56:06.793716  412831 system_pods.go:89] "registry-creds-764b6fb674-qwrlk" [242299e3-e588-4f0a-890d-da4c53cafcce] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1202 19:56:06.793725  412831 system_pods.go:89] "registry-proxy-stnrw" [e1efa7a9-b967-4abf-8104-14eb332f881f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1202 19:56:06.793732  412831 system_pods.go:89] "snapshot-controller-7d9fbc56b8-57ls2" [808c977f-4e69-4d1b-ba59-e82fe31100c7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1202 19:56:06.793743  412831 system_pods.go:89] "snapshot-controller-7d9fbc56b8-kwz4l" [7b98a4b5-96b5-4d1d-b6c6-983f165030db] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1202 19:56:06.793751  412831 system_pods.go:89] "storage-provisioner" [d1b4b030-354a-45e2-aa34-ff9768a43e99] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 19:56:06.793774  412831 retry.go:31] will retry after 442.819697ms: missing components: kube-dns
	I1202 19:56:06.840501  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:06.840669  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:07.069732  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:07.100154  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:07.242766  412831 system_pods.go:86] 20 kube-system pods found
	I1202 19:56:07.242806  412831 system_pods.go:89] "amd-gpu-device-plugin-nklpz" [9d4535df-fe2e-4f5a-8273-23b1b3e6d8b8] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1202 19:56:07.242818  412831 system_pods.go:89] "coredns-66bc5c9577-9mvmk" [ca5a6890-e2db-40a3-8302-3fcc4309e66a] Running
	I1202 19:56:07.242830  412831 system_pods.go:89] "csi-hostpath-attacher-0" [86ea36d5-0952-4bf9-82dd-fb267c9a17fe] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1202 19:56:07.242839  412831 system_pods.go:89] "csi-hostpath-resizer-0" [45b788c6-fc8c-49b8-883c-93d3160e893b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1202 19:56:07.242855  412831 system_pods.go:89] "csi-hostpathplugin-6h8dt" [782b735d-c731-4592-861f-0572e0581ce1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1202 19:56:07.242861  412831 system_pods.go:89] "etcd-addons-893295" [12b09750-804b-410b-8096-afb7db0b7cff] Running
	I1202 19:56:07.242867  412831 system_pods.go:89] "kindnet-bphsd" [035c64e4-9b5a-4fb5-9129-c78c186861ad] Running
	I1202 19:56:07.242872  412831 system_pods.go:89] "kube-apiserver-addons-893295" [44cedc90-0e81-4707-be65-2031c2da26db] Running
	I1202 19:56:07.242887  412831 system_pods.go:89] "kube-controller-manager-addons-893295" [f2a70c97-80a7-4072-8e06-31fdc7b7e92f] Running
	I1202 19:56:07.242896  412831 system_pods.go:89] "kube-ingress-dns-minikube" [8a7095ba-44a5-4e5c-bec7-847ffd18dc36] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1202 19:56:07.242901  412831 system_pods.go:89] "kube-proxy-2bxgd" [32906a03-9bcd-402b-948d-bcc65caa49fc] Running
	I1202 19:56:07.242907  412831 system_pods.go:89] "kube-scheduler-addons-893295" [85e7b347-9ab5-45c7-a9d3-2f9cdb139280] Running
	I1202 19:56:07.242915  412831 system_pods.go:89] "metrics-server-85b7d694d7-fbhzv" [51840c60-3fa9-4717-85ec-69d3082c6537] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1202 19:56:07.242923  412831 system_pods.go:89] "nvidia-device-plugin-daemonset-bkjsl" [ee51e4e2-139f-407a-a020-b6a91e40e7bf] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1202 19:56:07.242934  412831 system_pods.go:89] "registry-6b586f9694-86wz6" [8dd65e02-986d-4a9b-9796-d9014d33d6d4] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1202 19:56:07.242942  412831 system_pods.go:89] "registry-creds-764b6fb674-qwrlk" [242299e3-e588-4f0a-890d-da4c53cafcce] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1202 19:56:07.242950  412831 system_pods.go:89] "registry-proxy-stnrw" [e1efa7a9-b967-4abf-8104-14eb332f881f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1202 19:56:07.242958  412831 system_pods.go:89] "snapshot-controller-7d9fbc56b8-57ls2" [808c977f-4e69-4d1b-ba59-e82fe31100c7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1202 19:56:07.242968  412831 system_pods.go:89] "snapshot-controller-7d9fbc56b8-kwz4l" [7b98a4b5-96b5-4d1d-b6c6-983f165030db] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1202 19:56:07.242973  412831 system_pods.go:89] "storage-provisioner" [d1b4b030-354a-45e2-aa34-ff9768a43e99] Running
	I1202 19:56:07.242986  412831 system_pods.go:126] duration metric: took 978.660182ms to wait for k8s-apps to be running ...
	I1202 19:56:07.242995  412831 system_svc.go:44] waiting for kubelet service to be running ....
	I1202 19:56:07.243059  412831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 19:56:07.261962  412831 system_svc.go:56] duration metric: took 18.953254ms WaitForService to wait for kubelet
	I1202 19:56:07.262001  412831 kubeadm.go:587] duration metric: took 42.574009753s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 19:56:07.262028  412831 node_conditions.go:102] verifying NodePressure condition ...
	I1202 19:56:07.265514  412831 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1202 19:56:07.265541  412831 node_conditions.go:123] node cpu capacity is 8
	I1202 19:56:07.265558  412831 node_conditions.go:105] duration metric: took 3.525254ms to run NodePressure ...
	I1202 19:56:07.265573  412831 start.go:242] waiting for startup goroutines ...
	I1202 19:56:07.340815  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:07.340855  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:07.568918  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:07.599307  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:07.842830  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:07.842884  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:08.069425  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:08.099340  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:08.341255  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:08.341397  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:08.568725  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:08.599656  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:08.840164  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:08.840328  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:09.068704  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:09.098944  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:09.344266  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:09.344504  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:09.568987  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:09.599328  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:09.841730  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:09.841921  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:10.069056  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:10.099746  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:10.340362  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:10.340382  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:10.568858  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:10.599039  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:10.841423  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:10.842310  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:11.068254  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:11.099791  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:11.340413  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:11.340514  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:11.568533  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:11.599838  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:11.840480  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:11.840572  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:12.068780  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:12.099020  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:12.340521  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:12.340554  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:12.568895  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:12.599457  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:12.842263  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:12.842268  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:13.068500  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:13.099472  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:13.340917  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:13.340970  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:13.569179  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:13.599497  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:13.841521  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:13.841785  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:14.071641  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:14.098996  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:14.340999  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:14.341209  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:14.568019  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:14.598891  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:14.840738  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:14.840849  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:15.068579  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:15.099643  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:15.340261  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:15.340320  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:15.568710  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:15.669382  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:15.840826  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:15.840826  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:16.068685  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:16.098613  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:16.340116  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:16.340272  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:16.568119  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:16.599129  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:16.841300  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:16.841405  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:17.068718  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:17.098921  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:17.340635  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:17.340669  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:17.568885  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:17.599347  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:17.841122  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:17.841259  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:18.097236  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:18.098678  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:18.460302  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:18.460990  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:18.568582  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:18.599579  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:18.841814  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:18.841916  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:19.069160  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:19.099318  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:19.341132  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:19.341283  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:19.568630  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:19.598740  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:19.840321  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:19.840355  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:20.068921  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:20.099195  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:20.340606  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:20.340628  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:20.568373  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:20.599451  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:20.840166  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:20.840263  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:21.068272  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:21.099287  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:21.341896  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:21.342204  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:21.568573  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:21.599662  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:21.840232  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:21.840322  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:22.069160  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:22.099055  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:22.340697  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:22.340859  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:22.568684  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:22.598726  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:22.840963  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:22.841024  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:23.068045  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:23.099265  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:23.340933  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:23.340935  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:23.569138  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:23.599251  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:23.840365  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:23.840383  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:24.068540  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:24.099836  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:24.340486  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:24.340623  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:24.569135  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:24.598769  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:24.840789  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:24.840976  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:25.068882  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:25.098947  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:25.341123  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:25.341147  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:25.567881  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:25.598842  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:25.841233  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:25.841253  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:26.069607  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:26.099838  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:26.341045  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:26.341115  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:26.568922  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:26.598989  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:26.841452  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:26.841473  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:27.068875  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:27.099373  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:27.355846  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:27.355863  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:27.569786  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:27.598774  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:27.840778  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:27.840964  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:28.068660  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:28.170093  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:28.340875  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:28.341040  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:28.570325  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:28.599460  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:28.841411  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:28.841685  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:29.068824  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:29.169578  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:29.339794  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:29.339835  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:29.568706  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:29.598648  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:29.840250  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:29.840245  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:30.068121  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:30.098992  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:30.340586  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:30.340661  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:30.569062  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:30.600247  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:30.841287  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:30.841758  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:31.068121  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:31.099370  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:31.341472  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:31.341495  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:31.569114  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:31.599606  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:31.840489  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:31.840624  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:32.069063  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:32.099241  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:32.341320  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:32.341334  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:32.568537  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:32.600117  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:32.840788  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:32.840968  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:33.069406  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:33.099501  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:33.341156  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:33.341201  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:33.567803  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:33.598748  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:33.844421  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:33.844439  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:34.068417  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:34.099870  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:34.340175  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:34.340217  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:34.568449  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:34.599539  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:34.840453  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:34.840516  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:35.068263  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:35.099487  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:35.341115  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:35.341340  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:35.567965  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:35.598959  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:35.840564  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:35.840738  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:36.068717  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:36.099529  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:36.340587  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:36.340592  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:36.568620  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:36.599734  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:36.841124  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:36.841258  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:37.068923  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:37.099134  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:37.340980  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:37.340978  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:37.569331  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:37.599499  412831 kapi.go:107] duration metric: took 1m11.503979432s to wait for kubernetes.io/minikube-addons=registry ...
	I1202 19:56:37.841288  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:37.841318  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:38.068570  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:38.340263  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:38.340273  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:38.568808  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:38.841479  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:38.841517  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:39.067850  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:39.340450  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:39.340528  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:39.570112  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:39.840869  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:39.841713  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:40.069886  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:40.343121  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:40.344337  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:40.569583  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:40.877178  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:40.877194  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:41.068404  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:41.341412  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:41.341725  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:41.568707  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:41.840345  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:41.840559  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:42.069090  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:42.340982  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:42.341116  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:42.569259  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:42.843354  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:42.843380  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:43.068223  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:43.341388  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:43.341479  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:43.568455  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:43.839770  412831 kapi.go:107] duration metric: took 1m17.003256013s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1202 19:56:43.840180  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:44.069742  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:44.343306  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:44.592121  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:44.840891  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:45.069409  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:45.341162  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:45.567927  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:45.840804  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:46.069496  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:46.340441  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:46.568007  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:46.840930  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:47.068408  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:47.341031  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:47.567859  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:47.840129  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:48.067855  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:48.340746  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:48.570182  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:48.840245  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:49.069351  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:49.341301  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:49.569384  412831 kapi.go:107] duration metric: took 1m16.504581331s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1202 19:56:49.571417  412831 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-893295 cluster.
	I1202 19:56:49.572750  412831 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1202 19:56:49.574338  412831 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1202 19:56:49.840570  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:50.340445  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:50.873583  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:51.341408  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:51.840861  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:52.341898  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:52.841211  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:53.340469  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:53.840945  412831 kapi.go:107] duration metric: took 1m27.00444127s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1202 19:56:53.842506  412831 out.go:179] * Enabled addons: ingress-dns, nvidia-device-plugin, cloud-spanner, default-storageclass, inspektor-gadget, amd-gpu-device-plugin, registry-creds, storage-provisioner, yakd, storage-provisioner-rancher, metrics-server, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1202 19:56:53.843651  412831 addons.go:530] duration metric: took 1m29.155734667s for enable addons: enabled=[ingress-dns nvidia-device-plugin cloud-spanner default-storageclass inspektor-gadget amd-gpu-device-plugin registry-creds storage-provisioner yakd storage-provisioner-rancher metrics-server volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1202 19:56:53.843706  412831 start.go:247] waiting for cluster config update ...
	I1202 19:56:53.843737  412831 start.go:256] writing updated cluster config ...
	I1202 19:56:53.844053  412831 ssh_runner.go:195] Run: rm -f paused
	I1202 19:56:53.848462  412831 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1202 19:56:53.851766  412831 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-9mvmk" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 19:56:53.856792  412831 pod_ready.go:94] pod "coredns-66bc5c9577-9mvmk" is "Ready"
	I1202 19:56:53.856820  412831 pod_ready.go:86] duration metric: took 5.031488ms for pod "coredns-66bc5c9577-9mvmk" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 19:56:53.859208  412831 pod_ready.go:83] waiting for pod "etcd-addons-893295" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 19:56:53.863133  412831 pod_ready.go:94] pod "etcd-addons-893295" is "Ready"
	I1202 19:56:53.863165  412831 pod_ready.go:86] duration metric: took 3.93138ms for pod "etcd-addons-893295" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 19:56:53.865500  412831 pod_ready.go:83] waiting for pod "kube-apiserver-addons-893295" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 19:56:53.869547  412831 pod_ready.go:94] pod "kube-apiserver-addons-893295" is "Ready"
	I1202 19:56:53.869575  412831 pod_ready.go:86] duration metric: took 4.044043ms for pod "kube-apiserver-addons-893295" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 19:56:53.871548  412831 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-893295" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 19:56:54.252801  412831 pod_ready.go:94] pod "kube-controller-manager-addons-893295" is "Ready"
	I1202 19:56:54.252830  412831 pod_ready.go:86] duration metric: took 381.260599ms for pod "kube-controller-manager-addons-893295" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 19:56:54.453773  412831 pod_ready.go:83] waiting for pod "kube-proxy-2bxgd" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 19:56:54.852745  412831 pod_ready.go:94] pod "kube-proxy-2bxgd" is "Ready"
	I1202 19:56:54.852783  412831 pod_ready.go:86] duration metric: took 398.979558ms for pod "kube-proxy-2bxgd" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 19:56:55.056082  412831 pod_ready.go:83] waiting for pod "kube-scheduler-addons-893295" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 19:56:55.452727  412831 pod_ready.go:94] pod "kube-scheduler-addons-893295" is "Ready"
	I1202 19:56:55.452763  412831 pod_ready.go:86] duration metric: took 396.644943ms for pod "kube-scheduler-addons-893295" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 19:56:55.452778  412831 pod_ready.go:40] duration metric: took 1.604275769s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1202 19:56:55.497587  412831 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1202 19:56:55.500390  412831 out.go:179] * Done! kubectl is now configured to use "addons-893295" cluster and "default" namespace by default
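
The kapi.go:96 lines that dominate the log above are minikube's readiness poll: it re-lists the addon pods by label selector every few hundred milliseconds, logs the current phase while they are still Pending, and records the elapsed time once they report Ready (kapi.go:107). The sketch below reproduces that pattern with client-go; the 500ms interval, the helper name waitForLabeledPodsReady, and the use of wait.PollUntilContextTimeout are illustrative assumptions, not minikube's actual implementation.

package kapiwait

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForLabeledPodsReady blocks until every pod matching selector in ns reports
// the Ready condition, mirroring the "waiting for pod ... current state: Pending"
// loop and the closing "duration metric: took ..." line in the log above.
func waitForLabeledPodsReady(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	start := time.Now()
	err := wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil || len(pods.Items) == 0 {
				return false, nil // transient error or pods not created yet: keep polling
			}
			for i := range pods.Items {
				p := &pods.Items[i]
				if !podReady(p) {
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
					return false, nil
				}
			}
			return true, nil
		})
	if err == nil {
		fmt.Printf("duration metric: took %s to wait for %s ...\n", time.Since(start), selector)
	}
	return err
}

// podReady reports whether the pod's Ready condition is True.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}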
	
	
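The gcp-auth messages in the log above say the addon mounts GCP credentials into every new pod unless the pod carries a label with the gcp-auth-skip-secret key. Below is a minimal sketch of opting a pod out via that label using client-go; the pod name, image, and kubeconfig path are illustrative assumptions, not anything this test creates.

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Use the default kubeconfig that "minikube start" writes for the cluster.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// The label key is what the gcp-auth message asks for; the value is arbitrary here.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "no-gcp-creds", // hypothetical pod name
			Labels: map[string]string{"gcp-auth-skip-secret": "true"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "app",
				Image: "gcr.io/k8s-minikube/busybox", // image reused from the container list below
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.Background(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}

Per the same messages, pods that already exist keep their current state; they have to be recreated, or the addon re-enabled with --refresh, before credentials are mounted into them.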
	==> CRI-O <==
	Dec 02 19:58:24 addons-893295 crio[767]: time="2025-12-02T19:58:24.037008845Z" level=info msg="Pulling image: docker.io/upmcenterprises/registry-creds:1.10@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605" id=63898426-9e99-4c30-84d7-bbce1cd5057d name=/runtime.v1.ImageService/PullImage
	Dec 02 19:58:24 addons-893295 crio[767]: time="2025-12-02T19:58:24.041386671Z" level=info msg="Trying to access \"docker.io/upmcenterprises/registry-creds@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605\""
	Dec 02 19:58:25 addons-893295 crio[767]: time="2025-12-02T19:58:25.595675484Z" level=info msg="Pulled image: docker.io/upmcenterprises/registry-creds@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605" id=63898426-9e99-4c30-84d7-bbce1cd5057d name=/runtime.v1.ImageService/PullImage
	Dec 02 19:58:25 addons-893295 crio[767]: time="2025-12-02T19:58:25.59642344Z" level=info msg="Checking image status: docker.io/upmcenterprises/registry-creds:1.10@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605" id=b69d475c-ca93-471b-9de5-990822153671 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:58:25 addons-893295 crio[767]: time="2025-12-02T19:58:25.630536395Z" level=info msg="Checking image status: docker.io/upmcenterprises/registry-creds:1.10@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605" id=2055417a-ea05-4f9b-8240-76b4819a11f4 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:58:25 addons-893295 crio[767]: time="2025-12-02T19:58:25.635273366Z" level=info msg="Creating container: kube-system/registry-creds-764b6fb674-qwrlk/registry-creds" id=cf08a678-249b-476c-a69e-25685bfb1693 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 19:58:25 addons-893295 crio[767]: time="2025-12-02T19:58:25.635439836Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 19:58:25 addons-893295 crio[767]: time="2025-12-02T19:58:25.642725132Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 19:58:25 addons-893295 crio[767]: time="2025-12-02T19:58:25.643419254Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 19:58:25 addons-893295 crio[767]: time="2025-12-02T19:58:25.678261901Z" level=info msg="Created container 02523dcf96b4c0fc67c8513293234b22d238b2e7ec48d1a83e5da1a3f69bdb62: kube-system/registry-creds-764b6fb674-qwrlk/registry-creds" id=cf08a678-249b-476c-a69e-25685bfb1693 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 19:58:25 addons-893295 crio[767]: time="2025-12-02T19:58:25.678916751Z" level=info msg="Starting container: 02523dcf96b4c0fc67c8513293234b22d238b2e7ec48d1a83e5da1a3f69bdb62" id=52fb3f6c-76d7-4ca9-8067-b00c517acd16 name=/runtime.v1.RuntimeService/StartContainer
	Dec 02 19:58:25 addons-893295 crio[767]: time="2025-12-02T19:58:25.680802344Z" level=info msg="Started container" PID=8817 containerID=02523dcf96b4c0fc67c8513293234b22d238b2e7ec48d1a83e5da1a3f69bdb62 description=kube-system/registry-creds-764b6fb674-qwrlk/registry-creds id=52fb3f6c-76d7-4ca9-8067-b00c517acd16 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2766c048120e1017565e2084b7bcaf45fedd2eb8f757cacf7052beb2ad4aa7c8
	Dec 02 19:59:45 addons-893295 crio[767]: time="2025-12-02T19:59:45.838266293Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-phgqz/POD" id=552d94ed-d364-4cb9-81bd-10d7d64aefff name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 02 19:59:45 addons-893295 crio[767]: time="2025-12-02T19:59:45.838339498Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 19:59:45 addons-893295 crio[767]: time="2025-12-02T19:59:45.84551199Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-phgqz Namespace:default ID:ea8166c921d0567aaced36d50778589be059cafc4b9c9a77fe098e5a9d5ee41e UID:8cfeee29-2840-442d-af57-ee9403f418e2 NetNS:/var/run/netns/5eaeb9ae-6bd7-4f4b-9e65-0f664dadbc18 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008aef0}] Aliases:map[]}"
	Dec 02 19:59:45 addons-893295 crio[767]: time="2025-12-02T19:59:45.845551829Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-phgqz to CNI network \"kindnet\" (type=ptp)"
	Dec 02 19:59:45 addons-893295 crio[767]: time="2025-12-02T19:59:45.856288914Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-phgqz Namespace:default ID:ea8166c921d0567aaced36d50778589be059cafc4b9c9a77fe098e5a9d5ee41e UID:8cfeee29-2840-442d-af57-ee9403f418e2 NetNS:/var/run/netns/5eaeb9ae-6bd7-4f4b-9e65-0f664dadbc18 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008aef0}] Aliases:map[]}"
	Dec 02 19:59:45 addons-893295 crio[767]: time="2025-12-02T19:59:45.856442135Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-phgqz for CNI network kindnet (type=ptp)"
	Dec 02 19:59:45 addons-893295 crio[767]: time="2025-12-02T19:59:45.857396873Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 02 19:59:45 addons-893295 crio[767]: time="2025-12-02T19:59:45.858227132Z" level=info msg="Ran pod sandbox ea8166c921d0567aaced36d50778589be059cafc4b9c9a77fe098e5a9d5ee41e with infra container: default/hello-world-app-5d498dc89-phgqz/POD" id=552d94ed-d364-4cb9-81bd-10d7d64aefff name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 02 19:59:45 addons-893295 crio[767]: time="2025-12-02T19:59:45.859671524Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=40abc35a-8708-48c9-8c35-a2dd5d39f89c name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:59:45 addons-893295 crio[767]: time="2025-12-02T19:59:45.859847711Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=40abc35a-8708-48c9-8c35-a2dd5d39f89c name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:59:45 addons-893295 crio[767]: time="2025-12-02T19:59:45.859899035Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:1.0 found" id=40abc35a-8708-48c9-8c35-a2dd5d39f89c name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:59:45 addons-893295 crio[767]: time="2025-12-02T19:59:45.860693125Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=4640926f-db93-4b21-bd5d-f8a67a49363f name=/runtime.v1.ImageService/PullImage
	Dec 02 19:59:45 addons-893295 crio[767]: time="2025-12-02T19:59:45.870229839Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
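
The CRI-O entries above are the server side of the CRI gRPC calls the kubelet issues: ImageService.ImageStatus ("Checking image status"), ImageService.PullImage ("Pulling image"/"Trying to access"), then RuntimeService.CreateContainer and StartContainer. As a rough illustration only (nothing in this report runs this code), the image-side half of that exchange can be driven directly against the CRI-O socket with the k8s.io/cri-api client; the socket path and image name are taken from the log, everything else is an assumption.

package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx := context.Background()

	// CRI-O's default CRI socket inside the node.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	images := runtimeapi.NewImageServiceClient(conn)
	name := "docker.io/kicbase/echo-server:1.0" // image from the log above

	// "Checking image status": a nil Image in the response means it is not present yet.
	st, err := images.ImageStatus(ctx, &runtimeapi.ImageStatusRequest{
		Image: &runtimeapi.ImageSpec{Image: name},
	})
	if err != nil {
		panic(err)
	}
	if st.Image == nil {
		// "Pulling image": blocks until the pull finishes or fails.
		resp, err := images.PullImage(ctx, &runtimeapi.PullImageRequest{
			Image: &runtimeapi.ImageSpec{Image: name},
		})
		if err != nil {
			panic(err)
		}
		fmt.Println("pulled:", resp.ImageRef)
	}
}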
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                        NAMESPACE
	02523dcf96b4c       docker.io/upmcenterprises/registry-creds@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605                             About a minute ago   Running             registry-creds                           0                   2766c048120e1       registry-creds-764b6fb674-qwrlk            kube-system
	d2089c03d7042       docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7                                              2 minutes ago        Running             nginx                                    0                   d4cd943414e64       nginx                                      default
	4043884cb56dd       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          2 minutes ago        Running             busybox                                  0                   bd4ce7d269bc5       busybox                                    default
	72a3a94a86154       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          2 minutes ago        Running             csi-snapshotter                          0                   bfc8afb767584       csi-hostpathplugin-6h8dt                   kube-system
	3fc3b9c2bb546       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          2 minutes ago        Running             csi-provisioner                          0                   bfc8afb767584       csi-hostpathplugin-6h8dt                   kube-system
	23592b1014e08       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            2 minutes ago        Running             liveness-probe                           0                   bfc8afb767584       csi-hostpathplugin-6h8dt                   kube-system
	4873f6a4745b9       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           2 minutes ago        Running             hostpath                                 0                   bfc8afb767584       csi-hostpathplugin-6h8dt                   kube-system
	a46f3e2f5f2db       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 2 minutes ago        Running             gcp-auth                                 0                   3ad1bcb052644       gcp-auth-78565c9fb4-2jfqm                  gcp-auth
	69202c0144e36       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                3 minutes ago        Running             node-driver-registrar                    0                   bfc8afb767584       csi-hostpathplugin-6h8dt                   kube-system
	25661eee6e26e       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:9a12b3c1d155bb081ff408a9b6c1cec18573c967e0c3917225b81ffe11c0b7f2                            3 minutes ago        Running             gadget                                   0                   e06c27e61f63d       gadget-ps8xn                               gadget
	1206c9b45b619       registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27                             3 minutes ago        Running             controller                               0                   3f3f6db8899f8       ingress-nginx-controller-6c8bf45fb-sjqdl   ingress-nginx
	e272a50ae70ce       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago        Running             volume-snapshot-controller               0                   3d6e29dddc59f       snapshot-controller-7d9fbc56b8-57ls2       kube-system
	343bfc0b495be       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago        Running             volume-snapshot-controller               0                   620bab088befd       snapshot-controller-7d9fbc56b8-kwz4l       kube-system
	c935f2bdad559       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              3 minutes ago        Running             csi-resizer                              0                   a18a7a7a8a3df       csi-hostpath-resizer-0                     kube-system
	2021a9af4b97c       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              3 minutes ago        Running             registry-proxy                           0                   fda091e962415       registry-proxy-stnrw                       kube-system
	7d3c2329b0b0c       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     3 minutes ago        Running             amd-gpu-device-plugin                    0                   ba85d7878b417       amd-gpu-device-plugin-nklpz                kube-system
	33d9c5ffbca0f       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   3 minutes ago        Running             csi-external-health-monitor-controller   0                   bfc8afb767584       csi-hostpathplugin-6h8dt                   kube-system
	c59167a3c785b       nvcr.io/nvidia/k8s-device-plugin@sha256:20db699f1480b6f37423cab909e9c6df5a4fdbd981b405e0d72f00a86fee5100                                     3 minutes ago        Running             nvidia-device-plugin-ctr                 0                   c2625225a9c92       nvidia-device-plugin-daemonset-bkjsl       kube-system
	22d14c28c8779       884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45                                                                             3 minutes ago        Exited              patch                                    2                   22789b8159171       ingress-nginx-admission-patch-szllz        ingress-nginx
	cb9cd75fae78a       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              3 minutes ago        Running             yakd                                     0                   bc83c123410f9       yakd-dashboard-5ff678cb9-vvw4f             yakd-dashboard
	a852aad52763b       gcr.io/cloud-spanner-emulator/emulator@sha256:22a4d5b0f97bd0c2ee20da342493c5a60e40b4d62ec20c174cb32ff4ee1f65bf                               3 minutes ago        Running             cloud-spanner-emulator                   0                   daa7ab6489f14       cloud-spanner-emulator-5bdddb765-jf7fb     default
	1d0670321bc4a       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             3 minutes ago        Running             csi-attacher                             0                   71fd3c5475d50       csi-hostpath-attacher-0                    kube-system
	b0b7cf0d49211       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:f016159150cb72d879e0d3b6852afbed68fe21d86be1e92c62ab7f56515287f5                   3 minutes ago        Exited              create                                   0                   18b90e176390f       ingress-nginx-admission-create-bbp4j       ingress-nginx
	9012f9d6215d1       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           3 minutes ago        Running             registry                                 0                   57f5ca7658334       registry-6b586f9694-86wz6                  kube-system
	c2442e5b2ee0f       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             3 minutes ago        Running             local-path-provisioner                   0                   3611fa6ce6525       local-path-provisioner-648f6765c9-pjsp5    local-path-storage
	457ec4512e89c       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               3 minutes ago        Running             minikube-ingress-dns                     0                   8f364ab437b38       kube-ingress-dns-minikube                  kube-system
	91253d86ed19b       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        3 minutes ago        Running             metrics-server                           0                   d06f510aed173       metrics-server-85b7d694d7-fbhzv            kube-system
	1a4586dbac8e8       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             3 minutes ago        Running             coredns                                  0                   5455c2b2c4e2c       coredns-66bc5c9577-9mvmk                   kube-system
	548b1d008679f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             3 minutes ago        Running             storage-provisioner                      0                   4fe2877403abb       storage-provisioner                        kube-system
	92d33e649bb3a       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                                                             4 minutes ago        Running             kube-proxy                               0                   b420dc7f912c9       kube-proxy-2bxgd                           kube-system
	36e4834af5630       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             4 minutes ago        Running             kindnet-cni                              0                   1cb45e8f8d976       kindnet-bphsd                              kube-system
	1053c12fee90a       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                                                             4 minutes ago        Running             kube-controller-manager                  0                   2d840971538c4       kube-controller-manager-addons-893295      kube-system
	87e76e15e8595       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                                                             4 minutes ago        Running             etcd                                     0                   b57d6fa8076e3       etcd-addons-893295                         kube-system
	54de7a8ca3420       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                                                             4 minutes ago        Running             kube-scheduler                           0                   fdfc3db85baa5       kube-scheduler-addons-893295               kube-system
	64bbafcaa8986       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                                                             4 minutes ago        Running             kube-apiserver                           0                   015a7594dd1f5       kube-apiserver-addons-893295               kube-system
	
	
	==> coredns [1a4586dbac8e8d1828435d72cdf3947bd1869e463e0102cc7b6664ebbeddeacf] <==
	[INFO] 10.244.0.22:48819 - 33243 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000150311s
	[INFO] 10.244.0.22:58095 - 23803 "A IN storage.googleapis.com.europe-west2-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 190 0.005135561s
	[INFO] 10.244.0.22:46760 - 30608 "AAAA IN storage.googleapis.com.europe-west2-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 190 0.006942093s
	[INFO] 10.244.0.22:57552 - 45957 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.004772385s
	[INFO] 10.244.0.22:59923 - 41584 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.004980206s
	[INFO] 10.244.0.22:42005 - 63360 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.005137142s
	[INFO] 10.244.0.22:41974 - 23037 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.005781662s
	[INFO] 10.244.0.22:52086 - 14928 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001165022s
	[INFO] 10.244.0.22:48586 - 59113 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 268 0.001357076s
	[INFO] 10.244.0.25:44507 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000256976s
	[INFO] 10.244.0.25:50480 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000180799s
	[INFO] 10.244.0.31:56154 - 38751 "A IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000264801s
	[INFO] 10.244.0.31:57881 - 34508 "AAAA IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000386102s
	[INFO] 10.244.0.31:34563 - 48175 "AAAA IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000108529s
	[INFO] 10.244.0.31:43273 - 56048 "A IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000140844s
	[INFO] 10.244.0.31:53550 - 22558 "AAAA IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000106817s
	[INFO] 10.244.0.31:43935 - 15835 "A IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000097734s
	[INFO] 10.244.0.31:55570 - 21537 "AAAA IN accounts.google.com.europe-west2-a.c.k8s-minikube.internal. udp 76 false 512" NXDOMAIN qr,rd,ra 187 0.005545493s
	[INFO] 10.244.0.31:46648 - 41318 "A IN accounts.google.com.europe-west2-a.c.k8s-minikube.internal. udp 76 false 512" NXDOMAIN qr,rd,ra 187 0.005676545s
	[INFO] 10.244.0.31:42040 - 13578 "A IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.005316411s
	[INFO] 10.244.0.31:58321 - 52822 "AAAA IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.008138127s
	[INFO] 10.244.0.31:38360 - 56110 "AAAA IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.004612806s
	[INFO] 10.244.0.31:59424 - 38603 "A IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.004900651s
	[INFO] 10.244.0.31:42584 - 47248 "AAAA IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 84 0.001799155s
	[INFO] 10.244.0.31:33423 - 41549 "A IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 72 0.001916288s
	
	
	==> describe nodes <==
	Name:               addons-893295
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-893295
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c0dec3f81fa1d67ed4ee425e0873d0aa009f9b92
	                    minikube.k8s.io/name=addons-893295
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_02T19_55_20_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-893295
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-893295"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 02 Dec 2025 19:55:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-893295
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 02 Dec 2025 19:59:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 02 Dec 2025 19:58:53 +0000   Tue, 02 Dec 2025 19:55:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 02 Dec 2025 19:58:53 +0000   Tue, 02 Dec 2025 19:55:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 02 Dec 2025 19:58:53 +0000   Tue, 02 Dec 2025 19:55:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 02 Dec 2025 19:58:53 +0000   Tue, 02 Dec 2025 19:56:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-893295
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 c31a325af81b969158c21fa769271857
	  System UUID:                95635df6-c4bf-4028-a5ca-f3eeb7819f23
	  Boot ID:                    9dd5456f-e394-4b4b-9458-48d26faf507e
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (29 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m51s
	  default                     cloud-spanner-emulator-5bdddb765-jf7fb      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m22s
	  default                     hello-world-app-5d498dc89-phgqz             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m27s
	  gadget                      gadget-ps8xn                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m21s
	  gcp-auth                    gcp-auth-78565c9fb4-2jfqm                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m14s
	  ingress-nginx               ingress-nginx-controller-6c8bf45fb-sjqdl    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         4m21s
	  kube-system                 amd-gpu-device-plugin-nklpz                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m41s
	  kube-system                 coredns-66bc5c9577-9mvmk                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     4m23s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m21s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m21s
	  kube-system                 csi-hostpathplugin-6h8dt                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m41s
	  kube-system                 etcd-addons-893295                          100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m28s
	  kube-system                 kindnet-bphsd                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m24s
	  kube-system                 kube-apiserver-addons-893295                250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m28s
	  kube-system                 kube-controller-manager-addons-893295       200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m30s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m22s
	  kube-system                 kube-proxy-2bxgd                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m24s
	  kube-system                 kube-scheduler-addons-893295                100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m28s
	  kube-system                 metrics-server-85b7d694d7-fbhzv             100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         4m22s
	  kube-system                 nvidia-device-plugin-daemonset-bkjsl        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m41s
	  kube-system                 registry-6b586f9694-86wz6                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m22s
	  kube-system                 registry-creds-764b6fb674-qwrlk             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m22s
	  kube-system                 registry-proxy-stnrw                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m41s
	  kube-system                 snapshot-controller-7d9fbc56b8-57ls2        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m21s
	  kube-system                 snapshot-controller-7d9fbc56b8-kwz4l        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m21s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m22s
	  local-path-storage          local-path-provisioner-648f6765c9-pjsp5     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m21s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-vvw4f              0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     4m21s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m21s  kube-proxy       
	  Normal  Starting                 4m29s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m28s  kubelet          Node addons-893295 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m28s  kubelet          Node addons-893295 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m28s  kubelet          Node addons-893295 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m24s  node-controller  Node addons-893295 event: Registered Node addons-893295 in Controller
	  Normal  NodeReady                3m41s  kubelet          Node addons-893295 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 2e f4 c0 f2 56 fb 08 06
	[  +0.000355] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 06 95 9a 02 fc fb 08 06
	[Dec 2 19:57] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000013] ll header: 00000000: a6 a8 6c 2c e4 fb 66 95 0d 9f b9 e6 08 00
	[  +1.020139] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a6 a8 6c 2c e4 fb 66 95 0d 9f b9 e6 08 00
	[  +1.023918] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: a6 a8 6c 2c e4 fb 66 95 0d 9f b9 e6 08 00
	[  +1.023921] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: a6 a8 6c 2c e4 fb 66 95 0d 9f b9 e6 08 00
	[  +1.023940] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a6 a8 6c 2c e4 fb 66 95 0d 9f b9 e6 08 00
	[  +1.023881] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: a6 a8 6c 2c e4 fb 66 95 0d 9f b9 e6 08 00
	[  +2.047855] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 a8 6c 2c e4 fb 66 95 0d 9f b9 e6 08 00
	[  +4.031797] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 a8 6c 2c e4 fb 66 95 0d 9f b9 e6 08 00
	[  +8.191553] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: a6 a8 6c 2c e4 fb 66 95 0d 9f b9 e6 08 00
	[Dec 2 19:58] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: a6 a8 6c 2c e4 fb 66 95 0d 9f b9 e6 08 00
	[ +32.254238] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a6 a8 6c 2c e4 fb 66 95 0d 9f b9 e6 08 00
	
	
	==> etcd [87e76e15e8595d052d66a4d86ee8b1416a8f60a669646ecbdd55cf8343b8db42] <==
	{"level":"warn","ts":"2025-12-02T19:55:15.871704Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38138","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T19:55:15.878822Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T19:55:15.886403Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38194","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T19:55:15.904357Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T19:55:15.912703Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38230","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T19:55:15.920479Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38260","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T19:55:15.971242Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T19:55:27.395829Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T19:55:53.376273Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60374","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T19:55:53.383344Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60390","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T19:55:53.402502Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T19:55:53.409631Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60420","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-02T19:56:18.329346Z","caller":"traceutil/trace.go:172","msg":"trace[1739899132] transaction","detail":"{read_only:false; response_revision:1003; number_of_response:1; }","duration":"100.270745ms","start":"2025-12-02T19:56:18.229050Z","end":"2025-12-02T19:56:18.329321Z","steps":["trace[1739899132] 'process raft request'  (duration: 100.174567ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-02T19:56:18.458264Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"119.84542ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-02T19:56:18.458371Z","caller":"traceutil/trace.go:172","msg":"trace[212098468] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1003; }","duration":"119.983555ms","start":"2025-12-02T19:56:18.338371Z","end":"2025-12-02T19:56:18.458355Z","steps":["trace[212098468] 'agreement among raft nodes before linearized reading'  (duration: 48.051953ms)","trace[212098468] 'range keys from in-memory index tree'  (duration: 71.748553ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-02T19:56:18.458447Z","caller":"traceutil/trace.go:172","msg":"trace[2027857323] transaction","detail":"{read_only:false; response_revision:1004; number_of_response:1; }","duration":"124.949865ms","start":"2025-12-02T19:56:18.333472Z","end":"2025-12-02T19:56:18.458421Z","steps":["trace[2027857323] 'process raft request'  (duration: 52.984946ms)","trace[2027857323] 'compare'  (duration: 71.783925ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-02T19:56:18.458478Z","caller":"traceutil/trace.go:172","msg":"trace[177507218] transaction","detail":"{read_only:false; response_revision:1006; number_of_response:1; }","duration":"124.585103ms","start":"2025-12-02T19:56:18.333882Z","end":"2025-12-02T19:56:18.458467Z","steps":["trace[177507218] 'process raft request'  (duration: 124.54011ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-02T19:56:18.458497Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"120.095405ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-02T19:56:18.458537Z","caller":"traceutil/trace.go:172","msg":"trace[1486270404] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1006; }","duration":"120.140912ms","start":"2025-12-02T19:56:18.338385Z","end":"2025-12-02T19:56:18.458526Z","steps":["trace[1486270404] 'agreement among raft nodes before linearized reading'  (duration: 120.054031ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-02T19:56:18.458509Z","caller":"traceutil/trace.go:172","msg":"trace[1377585880] transaction","detail":"{read_only:false; response_revision:1005; number_of_response:1; }","duration":"125.023531ms","start":"2025-12-02T19:56:18.333472Z","end":"2025-12-02T19:56:18.458495Z","steps":["trace[1377585880] 'process raft request'  (duration: 124.893619ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-02T19:56:18.497636Z","caller":"traceutil/trace.go:172","msg":"trace[2023940001] transaction","detail":"{read_only:false; response_revision:1007; number_of_response:1; }","duration":"107.519025ms","start":"2025-12-02T19:56:18.390102Z","end":"2025-12-02T19:56:18.497621Z","steps":["trace[2023940001] 'process raft request'  (duration: 107.424441ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-02T19:56:27.505602Z","caller":"traceutil/trace.go:172","msg":"trace[613967848] transaction","detail":"{read_only:false; response_revision:1051; number_of_response:1; }","duration":"147.690619ms","start":"2025-12-02T19:56:27.357890Z","end":"2025-12-02T19:56:27.505580Z","steps":["trace[613967848] 'process raft request'  (duration: 76.657046ms)","trace[613967848] 'compare'  (duration: 70.918231ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-02T19:56:43.019714Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"112.499896ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/daemonsets\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-02T19:56:43.019795Z","caller":"traceutil/trace.go:172","msg":"trace[459523527] range","detail":"{range_begin:/registry/daemonsets; range_end:; response_count:0; response_revision:1148; }","duration":"112.601042ms","start":"2025-12-02T19:56:42.907179Z","end":"2025-12-02T19:56:43.019780Z","steps":["trace[459523527] 'range keys from in-memory index tree'  (duration: 112.427491ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-02T19:56:51.201122Z","caller":"traceutil/trace.go:172","msg":"trace[1378131373] transaction","detail":"{read_only:false; response_revision:1208; number_of_response:1; }","duration":"120.264416ms","start":"2025-12-02T19:56:51.080832Z","end":"2025-12-02T19:56:51.201096Z","steps":["trace[1378131373] 'process raft request'  (duration: 39.579953ms)","trace[1378131373] 'compare'  (duration: 80.384372ms)"],"step_count":2}
	
	
	==> gcp-auth [a46f3e2f5f2db824560ef63faaba0a67cdf308ef0ead014b67f26c5a5f5b3d67] <==
	2025/12/02 19:56:49 GCP Auth Webhook started!
	2025/12/02 19:56:55 Ready to marshal response ...
	2025/12/02 19:56:55 Ready to write response ...
	2025/12/02 19:56:56 Ready to marshal response ...
	2025/12/02 19:56:56 Ready to write response ...
	2025/12/02 19:56:56 Ready to marshal response ...
	2025/12/02 19:56:56 Ready to write response ...
	2025/12/02 19:57:11 Ready to marshal response ...
	2025/12/02 19:57:11 Ready to write response ...
	2025/12/02 19:57:11 Ready to marshal response ...
	2025/12/02 19:57:11 Ready to write response ...
	2025/12/02 19:57:15 Ready to marshal response ...
	2025/12/02 19:57:15 Ready to write response ...
	2025/12/02 19:57:20 Ready to marshal response ...
	2025/12/02 19:57:20 Ready to write response ...
	2025/12/02 19:57:22 Ready to marshal response ...
	2025/12/02 19:57:22 Ready to write response ...
	2025/12/02 19:57:24 Ready to marshal response ...
	2025/12/02 19:57:24 Ready to write response ...
	2025/12/02 19:57:52 Ready to marshal response ...
	2025/12/02 19:57:52 Ready to write response ...
	2025/12/02 19:59:45 Ready to marshal response ...
	2025/12/02 19:59:45 Ready to write response ...
	
	
	==> kernel <==
	 19:59:47 up  1:42,  0 user,  load average: 0.23, 1.23, 1.78
	Linux addons-893295 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [36e4834af56306fb62768ad3dbe8f24dbfa293561c0c41bee1c6d418ce06f454] <==
	I1202 19:57:45.820729       1 main.go:301] handling current node
	I1202 19:57:55.817122       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 19:57:55.817158       1 main.go:301] handling current node
	I1202 19:58:05.817029       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 19:58:05.817142       1 main.go:301] handling current node
	I1202 19:58:15.817050       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 19:58:15.817116       1 main.go:301] handling current node
	I1202 19:58:25.818372       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 19:58:25.818412       1 main.go:301] handling current node
	I1202 19:58:35.821228       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 19:58:35.821259       1 main.go:301] handling current node
	I1202 19:58:45.825391       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 19:58:45.825427       1 main.go:301] handling current node
	I1202 19:58:55.817731       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 19:58:55.817782       1 main.go:301] handling current node
	I1202 19:59:05.823760       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 19:59:05.823799       1 main.go:301] handling current node
	I1202 19:59:15.824969       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 19:59:15.825005       1 main.go:301] handling current node
	I1202 19:59:25.817667       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 19:59:25.817708       1 main.go:301] handling current node
	I1202 19:59:35.823271       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 19:59:35.823306       1 main.go:301] handling current node
	I1202 19:59:45.817375       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 19:59:45.817406       1 main.go:301] handling current node
	
	
	==> kube-apiserver [64bbafcaa8986f6e93390db3e1aa160fe3cecdd54cdd91e940adc5db87fefb45] <==
	W1202 19:55:53.409525       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1202 19:56:06.124380       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.111.102.153:443: connect: connection refused
	E1202 19:56:06.125962       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.111.102.153:443: connect: connection refused" logger="UnhandledError"
	W1202 19:56:06.125291       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.111.102.153:443: connect: connection refused
	E1202 19:56:06.126055       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.111.102.153:443: connect: connection refused" logger="UnhandledError"
	W1202 19:56:06.150918       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.111.102.153:443: connect: connection refused
	E1202 19:56:06.150959       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.111.102.153:443: connect: connection refused" logger="UnhandledError"
	W1202 19:56:06.151059       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.111.102.153:443: connect: connection refused
	E1202 19:56:06.151121       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.111.102.153:443: connect: connection refused" logger="UnhandledError"
	E1202 19:56:09.220298       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.25.184:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.106.25.184:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.106.25.184:443: connect: connection refused" logger="UnhandledError"
	W1202 19:56:09.220311       1 handler_proxy.go:99] no RequestInfo found in the context
	E1202 19:56:09.220460       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1202 19:56:09.221088       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.25.184:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.106.25.184:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.106.25.184:443: connect: connection refused" logger="UnhandledError"
	E1202 19:56:09.226487       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.25.184:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.106.25.184:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.106.25.184:443: connect: connection refused" logger="UnhandledError"
	E1202 19:56:09.247916       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.25.184:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.106.25.184:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.106.25.184:443: connect: connection refused" logger="UnhandledError"
	I1202 19:56:09.341377       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1202 19:57:05.188156       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:51502: use of closed network connection
	E1202 19:57:05.342315       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:51540: use of closed network connection
	I1202 19:57:20.276488       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1202 19:57:20.472291       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.102.47.122"}
	I1202 19:57:35.650865       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1202 19:59:45.615022       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.99.124.240"}
	
	
	==> kube-controller-manager [1053c12fee90a817e22976e0dc30541fb27c049e02c7c5af353833a13b30e982] <==
	I1202 19:55:23.360550       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1202 19:55:23.361805       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1202 19:55:23.363897       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1202 19:55:23.363934       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1202 19:55:23.363977       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1202 19:55:23.364006       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1202 19:55:23.364016       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1202 19:55:23.364023       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1202 19:55:23.364149       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1202 19:55:23.365395       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1202 19:55:23.370997       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="addons-893295" podCIDRs=["10.244.0.0/24"]
	I1202 19:55:23.374126       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1202 19:55:23.384971       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1202 19:55:23.389179       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1202 19:55:23.389206       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1202 19:55:23.389215       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	E1202 19:55:25.987978       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1202 19:55:53.369748       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1202 19:55:53.369951       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1202 19:55:53.370005       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1202 19:55:53.392552       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1202 19:55:53.396673       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1202 19:55:53.471085       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1202 19:55:53.497428       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1202 19:56:08.366207       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [92d33e649bb3a4d1fbfddebd2c29786df9d0f9bbe8c4df37c931d3fb4cae82a7] <==
	I1202 19:55:25.649054       1 server_linux.go:53] "Using iptables proxy"
	I1202 19:55:25.882566       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1202 19:55:25.988575       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1202 19:55:25.988673       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1202 19:55:25.988802       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1202 19:55:26.120262       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1202 19:55:26.120418       1 server_linux.go:132] "Using iptables Proxier"
	I1202 19:55:26.128948       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1202 19:55:26.129924       1 server.go:527] "Version info" version="v1.34.2"
	I1202 19:55:26.130196       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 19:55:26.160522       1 config.go:200] "Starting service config controller"
	I1202 19:55:26.160612       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1202 19:55:26.160756       1 config.go:106] "Starting endpoint slice config controller"
	I1202 19:55:26.160918       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1202 19:55:26.161001       1 config.go:403] "Starting serviceCIDR config controller"
	I1202 19:55:26.161028       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1202 19:55:26.162132       1 config.go:309] "Starting node config controller"
	I1202 19:55:26.162200       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1202 19:55:26.260801       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1202 19:55:26.261946       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1202 19:55:26.261967       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1202 19:55:26.263481       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [54de7a8ca3420358423254cbf3d9a5a5e7140b7f46e22139375e40856000099c] <==
	E1202 19:55:16.388944       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1202 19:55:16.389121       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1202 19:55:16.389105       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1202 19:55:16.389311       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1202 19:55:16.389361       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1202 19:55:16.389377       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1202 19:55:16.389499       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1202 19:55:16.389503       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1202 19:55:16.389579       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1202 19:55:16.389620       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1202 19:55:16.389632       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1202 19:55:16.389667       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1202 19:55:16.389673       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1202 19:55:17.267586       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1202 19:55:17.268580       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1202 19:55:17.282331       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1202 19:55:17.293220       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1202 19:55:17.321592       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1202 19:55:17.351793       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1202 19:55:17.360020       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1202 19:55:17.444948       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1202 19:55:17.487657       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1202 19:55:17.501893       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1202 19:55:17.541246       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	I1202 19:55:20.386177       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 02 19:58:00 addons-893295 kubelet[1277]: I1202 19:58:00.355547    1277 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"task-pv-storage\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^2f050954-cfb9-11f0-ade9-5e57e1aab7cf\") pod \"ec77a64b-cf94-4e6d-b3b6-eac74cec8b54\" (UID: \"ec77a64b-cf94-4e6d-b3b6-eac74cec8b54\") "
	Dec 02 19:58:00 addons-893295 kubelet[1277]: I1202 19:58:00.355603    1277 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9d5l4\" (UniqueName: \"kubernetes.io/projected/ec77a64b-cf94-4e6d-b3b6-eac74cec8b54-kube-api-access-9d5l4\") pod \"ec77a64b-cf94-4e6d-b3b6-eac74cec8b54\" (UID: \"ec77a64b-cf94-4e6d-b3b6-eac74cec8b54\") "
	Dec 02 19:58:00 addons-893295 kubelet[1277]: I1202 19:58:00.355750    1277 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/ec77a64b-cf94-4e6d-b3b6-eac74cec8b54-gcp-creds\") on node \"addons-893295\" DevicePath \"\""
	Dec 02 19:58:00 addons-893295 kubelet[1277]: I1202 19:58:00.358186    1277 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec77a64b-cf94-4e6d-b3b6-eac74cec8b54-kube-api-access-9d5l4" (OuterVolumeSpecName: "kube-api-access-9d5l4") pod "ec77a64b-cf94-4e6d-b3b6-eac74cec8b54" (UID: "ec77a64b-cf94-4e6d-b3b6-eac74cec8b54"). InnerVolumeSpecName "kube-api-access-9d5l4". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Dec 02 19:58:00 addons-893295 kubelet[1277]: I1202 19:58:00.358963    1277 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/hostpath.csi.k8s.io^2f050954-cfb9-11f0-ade9-5e57e1aab7cf" (OuterVolumeSpecName: "task-pv-storage") pod "ec77a64b-cf94-4e6d-b3b6-eac74cec8b54" (UID: "ec77a64b-cf94-4e6d-b3b6-eac74cec8b54"). InnerVolumeSpecName "pvc-e08c2175-1cf8-468b-a854-1aa0ad2c2a78". PluginName "kubernetes.io/csi", VolumeGIDValue ""
	Dec 02 19:58:00 addons-893295 kubelet[1277]: I1202 19:58:00.457052    1277 reconciler_common.go:292] "operationExecutor.UnmountDevice started for volume \"pvc-e08c2175-1cf8-468b-a854-1aa0ad2c2a78\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^2f050954-cfb9-11f0-ade9-5e57e1aab7cf\") on node \"addons-893295\" "
	Dec 02 19:58:00 addons-893295 kubelet[1277]: I1202 19:58:00.457140    1277 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9d5l4\" (UniqueName: \"kubernetes.io/projected/ec77a64b-cf94-4e6d-b3b6-eac74cec8b54-kube-api-access-9d5l4\") on node \"addons-893295\" DevicePath \"\""
	Dec 02 19:58:00 addons-893295 kubelet[1277]: I1202 19:58:00.461834    1277 operation_generator.go:895] UnmountDevice succeeded for volume "pvc-e08c2175-1cf8-468b-a854-1aa0ad2c2a78" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^2f050954-cfb9-11f0-ade9-5e57e1aab7cf") on node "addons-893295"
	Dec 02 19:58:00 addons-893295 kubelet[1277]: I1202 19:58:00.558222    1277 reconciler_common.go:299] "Volume detached for volume \"pvc-e08c2175-1cf8-468b-a854-1aa0ad2c2a78\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^2f050954-cfb9-11f0-ade9-5e57e1aab7cf\") on node \"addons-893295\" DevicePath \"\""
	Dec 02 19:58:00 addons-893295 kubelet[1277]: I1202 19:58:00.735589    1277 scope.go:117] "RemoveContainer" containerID="33e200fd0579ca3d22fdf56c6d69e163ef76d307921e944a322a1c97ffefc1f0"
	Dec 02 19:58:00 addons-893295 kubelet[1277]: I1202 19:58:00.744852    1277 scope.go:117] "RemoveContainer" containerID="33e200fd0579ca3d22fdf56c6d69e163ef76d307921e944a322a1c97ffefc1f0"
	Dec 02 19:58:00 addons-893295 kubelet[1277]: E1202 19:58:00.745410    1277 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33e200fd0579ca3d22fdf56c6d69e163ef76d307921e944a322a1c97ffefc1f0\": container with ID starting with 33e200fd0579ca3d22fdf56c6d69e163ef76d307921e944a322a1c97ffefc1f0 not found: ID does not exist" containerID="33e200fd0579ca3d22fdf56c6d69e163ef76d307921e944a322a1c97ffefc1f0"
	Dec 02 19:58:00 addons-893295 kubelet[1277]: I1202 19:58:00.745464    1277 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33e200fd0579ca3d22fdf56c6d69e163ef76d307921e944a322a1c97ffefc1f0"} err="failed to get container status \"33e200fd0579ca3d22fdf56c6d69e163ef76d307921e944a322a1c97ffefc1f0\": rpc error: code = NotFound desc = could not find container \"33e200fd0579ca3d22fdf56c6d69e163ef76d307921e944a322a1c97ffefc1f0\": container with ID starting with 33e200fd0579ca3d22fdf56c6d69e163ef76d307921e944a322a1c97ffefc1f0 not found: ID does not exist"
	Dec 02 19:58:01 addons-893295 kubelet[1277]: I1202 19:58:01.014323    1277 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-nklpz" secret="" err="secret \"gcp-auth\" not found"
	Dec 02 19:58:01 addons-893295 kubelet[1277]: I1202 19:58:01.017401    1277 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec77a64b-cf94-4e6d-b3b6-eac74cec8b54" path="/var/lib/kubelet/pods/ec77a64b-cf94-4e6d-b3b6-eac74cec8b54/volumes"
	Dec 02 19:58:03 addons-893295 kubelet[1277]: I1202 19:58:03.013665    1277 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-stnrw" secret="" err="secret \"gcp-auth\" not found"
	Dec 02 19:58:09 addons-893295 kubelet[1277]: E1202 19:58:09.166603    1277 pod_workers.go:1324] "Error syncing pod, skipping" err="unmounted volumes=[gcr-creds], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="kube-system/registry-creds-764b6fb674-qwrlk" podUID="242299e3-e588-4f0a-890d-da4c53cafcce"
	Dec 02 19:58:19 addons-893295 kubelet[1277]: I1202 19:58:19.045833    1277 scope.go:117] "RemoveContainer" containerID="11f5f8d7e7fcb81b5f013a5452019f9bb0057306fb1abaaa9780b53abb84e785"
	Dec 02 19:58:19 addons-893295 kubelet[1277]: I1202 19:58:19.054603    1277 scope.go:117] "RemoveContainer" containerID="2bd7583288783bcf0b85082be36d561344caa0fb468563f27d1973996f081ec7"
	Dec 02 19:58:25 addons-893295 kubelet[1277]: I1202 19:58:25.853780    1277 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/registry-creds-764b6fb674-qwrlk" podStartSLOduration=179.292718556 podStartE2EDuration="3m0.853756474s" podCreationTimestamp="2025-12-02 19:55:25 +0000 UTC" firstStartedPulling="2025-12-02 19:58:24.03664081 +0000 UTC m=+185.112589784" lastFinishedPulling="2025-12-02 19:58:25.597678722 +0000 UTC m=+186.673627702" observedRunningTime="2025-12-02 19:58:25.852925233 +0000 UTC m=+186.928874247" watchObservedRunningTime="2025-12-02 19:58:25.853756474 +0000 UTC m=+186.929705460"
	Dec 02 19:58:57 addons-893295 kubelet[1277]: I1202 19:58:57.013714    1277 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-bkjsl" secret="" err="secret \"gcp-auth\" not found"
	Dec 02 19:59:21 addons-893295 kubelet[1277]: I1202 19:59:21.014555    1277 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-stnrw" secret="" err="secret \"gcp-auth\" not found"
	Dec 02 19:59:24 addons-893295 kubelet[1277]: I1202 19:59:24.014407    1277 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-nklpz" secret="" err="secret \"gcp-auth\" not found"
	Dec 02 19:59:45 addons-893295 kubelet[1277]: I1202 19:59:45.582753    1277 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/8cfeee29-2840-442d-af57-ee9403f418e2-gcp-creds\") pod \"hello-world-app-5d498dc89-phgqz\" (UID: \"8cfeee29-2840-442d-af57-ee9403f418e2\") " pod="default/hello-world-app-5d498dc89-phgqz"
	Dec 02 19:59:45 addons-893295 kubelet[1277]: I1202 19:59:45.582838    1277 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwtxc\" (UniqueName: \"kubernetes.io/projected/8cfeee29-2840-442d-af57-ee9403f418e2-kube-api-access-rwtxc\") pod \"hello-world-app-5d498dc89-phgqz\" (UID: \"8cfeee29-2840-442d-af57-ee9403f418e2\") " pod="default/hello-world-app-5d498dc89-phgqz"
	
	
	==> storage-provisioner [548b1d008679ffab8ee06c2f360e832860d3a7904cf593b25d31675b7bb892f9] <==
	W1202 19:59:21.584269       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 19:59:23.587205       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 19:59:23.591133       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 19:59:25.594161       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 19:59:25.598373       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 19:59:27.602330       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 19:59:27.607293       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 19:59:29.610380       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 19:59:29.614911       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 19:59:31.618486       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 19:59:31.624806       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 19:59:33.628478       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 19:59:33.634049       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 19:59:35.637361       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 19:59:35.641916       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 19:59:37.645755       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 19:59:37.650087       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 19:59:39.653261       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 19:59:39.658289       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 19:59:41.661984       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 19:59:41.666670       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 19:59:43.670035       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 19:59:43.674303       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 19:59:45.679295       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 19:59:45.692098       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-893295 -n addons-893295
helpers_test.go:269: (dbg) Run:  kubectl --context addons-893295 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-world-app-5d498dc89-phgqz ingress-nginx-admission-create-bbp4j ingress-nginx-admission-patch-szllz
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-893295 describe pod hello-world-app-5d498dc89-phgqz ingress-nginx-admission-create-bbp4j ingress-nginx-admission-patch-szllz
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-893295 describe pod hello-world-app-5d498dc89-phgqz ingress-nginx-admission-create-bbp4j ingress-nginx-admission-patch-szllz: exit status 1 (83.515441ms)

-- stdout --
	Name:             hello-world-app-5d498dc89-phgqz
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-893295/192.168.49.2
	Start Time:       Tue, 02 Dec 2025 19:59:45 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=5d498dc89
	Annotations:      <none>
	Status:           Running
	IP:               10.244.0.32
	IPs:
	  IP:           10.244.0.32
	Controlled By:  ReplicaSet/hello-world-app-5d498dc89
	Containers:
	  hello-world-app:
	    Container ID:   cri-o://3209b33de841bfe21887e2e55b66134529c61b5f861e5bf62bfdb215709c5279
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Running
	      Started:      Tue, 02 Dec 2025 19:59:47 +0000
	    Ready:          True
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rwtxc (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       True 
	  ContainersReady             True 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-rwtxc:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  3s    default-scheduler  Successfully assigned default/hello-world-app-5d498dc89-phgqz to addons-893295
	  Normal  Pulling    3s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"
	  Normal  Pulled     1s    kubelet            Successfully pulled image "docker.io/kicbase/echo-server:1.0" in 1.583s (1.583s including waiting). Image size: 4944818 bytes.
	  Normal  Created    1s    kubelet            Created container: hello-world-app
	  Normal  Started    1s    kubelet            Started container hello-world-app

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-bbp4j" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-szllz" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-893295 describe pod hello-world-app-5d498dc89-phgqz ingress-nginx-admission-create-bbp4j ingress-nginx-admission-patch-szllz: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-893295 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-893295 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (262.184459ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1202 19:59:48.289903  427313 out.go:360] Setting OutFile to fd 1 ...
	I1202 19:59:48.290256  427313 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 19:59:48.290269  427313 out.go:374] Setting ErrFile to fd 2...
	I1202 19:59:48.290274  427313 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 19:59:48.290520  427313 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-407427/.minikube/bin
	I1202 19:59:48.290855  427313 mustload.go:66] Loading cluster: addons-893295
	I1202 19:59:48.291350  427313 config.go:182] Loaded profile config "addons-893295": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:59:48.291389  427313 addons.go:622] checking whether the cluster is paused
	I1202 19:59:48.291535  427313 config.go:182] Loaded profile config "addons-893295": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:59:48.291564  427313 host.go:66] Checking if "addons-893295" exists ...
	I1202 19:59:48.292130  427313 cli_runner.go:164] Run: docker container inspect addons-893295 --format={{.State.Status}}
	I1202 19:59:48.312969  427313 ssh_runner.go:195] Run: systemctl --version
	I1202 19:59:48.313429  427313 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-893295
	I1202 19:59:48.333900  427313 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/addons-893295/id_rsa Username:docker}
	I1202 19:59:48.434380  427313 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 19:59:48.434470  427313 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 19:59:48.466366  427313 cri.go:89] found id: "02523dcf96b4c0fc67c8513293234b22d238b2e7ec48d1a83e5da1a3f69bdb62"
	I1202 19:59:48.466388  427313 cri.go:89] found id: "72a3a94a8615446f6a8a6edf8cab89a31462a9125890a07caa7b5c08f54ee5d4"
	I1202 19:59:48.466393  427313 cri.go:89] found id: "3fc3b9c2bb5465a31c0448a05bdfa005e3690110089411631dc7f034b6d8ba5f"
	I1202 19:59:48.466397  427313 cri.go:89] found id: "23592b1014e085ea0e5ab3db08387563e82cae3f3801aefb1a36803352f4b32c"
	I1202 19:59:48.466399  427313 cri.go:89] found id: "4873f6a4745b98e6565829135d48f208fc3b8c8fc38349268058cfe66db69ace"
	I1202 19:59:48.466403  427313 cri.go:89] found id: "69202c0144e36fa98f89f3e4dcc0bb6766cd1a5e7765438a217890a210ccc213"
	I1202 19:59:48.466406  427313 cri.go:89] found id: "e272a50ae70cef4e55de4fc5c4b0afb42c240aef2f0e61c0f58d21f32bb4b1b8"
	I1202 19:59:48.466410  427313 cri.go:89] found id: "343bfc0b495bea2a196f645318c6f732f4aac4d10f89f12fe35398625eac34a6"
	I1202 19:59:48.466412  427313 cri.go:89] found id: "c935f2bdad559803c1b224bb424e2d6a8e3f939cc705debca52e51d3b73805cb"
	I1202 19:59:48.466417  427313 cri.go:89] found id: "2021a9af4b97cf9f19cd51daff4057de8ce4a98c1392ab4618729a6e1fdbe890"
	I1202 19:59:48.466421  427313 cri.go:89] found id: "7d3c2329b0b0c2e623e8d3059a441a596800bfcc5ff55d233343c158bb68d997"
	I1202 19:59:48.466423  427313 cri.go:89] found id: "33d9c5ffbca0f707ad94361bf00ebbc97925e1784dd973ef7bd8245741da9b67"
	I1202 19:59:48.466426  427313 cri.go:89] found id: "c59167a3c785bc464e3e63318df704b0084b4a2a24721b883033175b6f4b533f"
	I1202 19:59:48.466429  427313 cri.go:89] found id: "1d0670321bc4abe2d7954d0d6f908cf4e3863170f2e522b0100392c768577198"
	I1202 19:59:48.466432  427313 cri.go:89] found id: "9012f9d6215d108610b3c6096d8b9fd68c47c3b0a9ba15cab4f13cc9e385d4b9"
	I1202 19:59:48.466438  427313 cri.go:89] found id: "457ec4512e89c116a7c5ba880e93b4b91cf5fc694ff53ccf03533d6e1e36de9b"
	I1202 19:59:48.466441  427313 cri.go:89] found id: "91253d86ed19be0b0e1a31e49336ee85f71ca41d7f491fcc1fd6cd2978993ba0"
	I1202 19:59:48.466446  427313 cri.go:89] found id: "1a4586dbac8e8d1828435d72cdf3947bd1869e463e0102cc7b6664ebbeddeacf"
	I1202 19:59:48.466449  427313 cri.go:89] found id: "548b1d008679ffab8ee06c2f360e832860d3a7904cf593b25d31675b7bb892f9"
	I1202 19:59:48.466452  427313 cri.go:89] found id: "92d33e649bb3a4d1fbfddebd2c29786df9d0f9bbe8c4df37c931d3fb4cae82a7"
	I1202 19:59:48.466457  427313 cri.go:89] found id: "36e4834af56306fb62768ad3dbe8f24dbfa293561c0c41bee1c6d418ce06f454"
	I1202 19:59:48.466460  427313 cri.go:89] found id: "1053c12fee90a817e22976e0dc30541fb27c049e02c7c5af353833a13b30e982"
	I1202 19:59:48.466462  427313 cri.go:89] found id: "87e76e15e8595d052d66a4d86ee8b1416a8f60a669646ecbdd55cf8343b8db42"
	I1202 19:59:48.466465  427313 cri.go:89] found id: "54de7a8ca3420358423254cbf3d9a5a5e7140b7f46e22139375e40856000099c"
	I1202 19:59:48.466468  427313 cri.go:89] found id: "64bbafcaa8986f6e93390db3e1aa160fe3cecdd54cdd91e940adc5db87fefb45"
	I1202 19:59:48.466471  427313 cri.go:89] found id: ""
	I1202 19:59:48.466511  427313 ssh_runner.go:195] Run: sudo runc list -f json
	I1202 19:59:48.481480  427313 out.go:203] 
	W1202 19:59:48.482666  427313 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T19:59:48Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T19:59:48Z" level=error msg="open /run/runc: no such file or directory"
	
	W1202 19:59:48.482689  427313 out.go:285] * 
	* 
	W1202 19:59:48.487124  427313 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 19:59:48.488389  427313 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable ingress-dns addon: args "out/minikube-linux-amd64 -p addons-893295 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
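Note: the exit status 11 above is not specific to ingress-dns. Each addons disable call in this log first runs the paused-cluster check (addons.go:622), which lists kube-system containers with crictl and then runs `sudo runc list -f json` on the node; that command fails with `open /run/runc: no such file or directory` on this crio node, so minikube aborts with MK_ADDON_DISABLE_PAUSED before touching the addon. A minimal Go sketch of the failing shell-out, useful only for reproducing the error during triage (the helper name is hypothetical; this is not minikube's actual implementation):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// listRuncContainers mirrors the shell-out seen in the log above
	// ("sudo runc list -f json"). On a node where /run/runc does not
	// exist, runc exits non-zero and its stderr is carried in the error.
	func listRuncContainers() (string, error) {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
		if err != nil {
			return "", fmt.Errorf("runc list failed: %w: %s", err, out)
		}
		return string(out), nil
	}

	func main() {
		if _, err := listRuncContainers(); err != nil {
			// Expected on this node: "open /run/runc: no such file or directory".
			fmt.Println(err)
		}
	}

The crictl listing immediately before the runc call succeeds (cri.go reports the container IDs), so container state is available on the node; only the runc-based paused check fails on this runtime.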
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-893295 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-893295 addons disable ingress --alsologtostderr -v=1: exit status 11 (259.389846ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1202 19:59:48.552158  427374 out.go:360] Setting OutFile to fd 1 ...
	I1202 19:59:48.552266  427374 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 19:59:48.552274  427374 out.go:374] Setting ErrFile to fd 2...
	I1202 19:59:48.552278  427374 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 19:59:48.552472  427374 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-407427/.minikube/bin
	I1202 19:59:48.552732  427374 mustload.go:66] Loading cluster: addons-893295
	I1202 19:59:48.553097  427374 config.go:182] Loaded profile config "addons-893295": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:59:48.553124  427374 addons.go:622] checking whether the cluster is paused
	I1202 19:59:48.553213  427374 config.go:182] Loaded profile config "addons-893295": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:59:48.553229  427374 host.go:66] Checking if "addons-893295" exists ...
	I1202 19:59:48.553654  427374 cli_runner.go:164] Run: docker container inspect addons-893295 --format={{.State.Status}}
	I1202 19:59:48.572302  427374 ssh_runner.go:195] Run: systemctl --version
	I1202 19:59:48.572374  427374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-893295
	I1202 19:59:48.592261  427374 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/addons-893295/id_rsa Username:docker}
	I1202 19:59:48.693228  427374 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 19:59:48.693312  427374 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 19:59:48.725241  427374 cri.go:89] found id: "02523dcf96b4c0fc67c8513293234b22d238b2e7ec48d1a83e5da1a3f69bdb62"
	I1202 19:59:48.725268  427374 cri.go:89] found id: "72a3a94a8615446f6a8a6edf8cab89a31462a9125890a07caa7b5c08f54ee5d4"
	I1202 19:59:48.725274  427374 cri.go:89] found id: "3fc3b9c2bb5465a31c0448a05bdfa005e3690110089411631dc7f034b6d8ba5f"
	I1202 19:59:48.725279  427374 cri.go:89] found id: "23592b1014e085ea0e5ab3db08387563e82cae3f3801aefb1a36803352f4b32c"
	I1202 19:59:48.725284  427374 cri.go:89] found id: "4873f6a4745b98e6565829135d48f208fc3b8c8fc38349268058cfe66db69ace"
	I1202 19:59:48.725289  427374 cri.go:89] found id: "69202c0144e36fa98f89f3e4dcc0bb6766cd1a5e7765438a217890a210ccc213"
	I1202 19:59:48.725294  427374 cri.go:89] found id: "e272a50ae70cef4e55de4fc5c4b0afb42c240aef2f0e61c0f58d21f32bb4b1b8"
	I1202 19:59:48.725298  427374 cri.go:89] found id: "343bfc0b495bea2a196f645318c6f732f4aac4d10f89f12fe35398625eac34a6"
	I1202 19:59:48.725302  427374 cri.go:89] found id: "c935f2bdad559803c1b224bb424e2d6a8e3f939cc705debca52e51d3b73805cb"
	I1202 19:59:48.725308  427374 cri.go:89] found id: "2021a9af4b97cf9f19cd51daff4057de8ce4a98c1392ab4618729a6e1fdbe890"
	I1202 19:59:48.725311  427374 cri.go:89] found id: "7d3c2329b0b0c2e623e8d3059a441a596800bfcc5ff55d233343c158bb68d997"
	I1202 19:59:48.725314  427374 cri.go:89] found id: "33d9c5ffbca0f707ad94361bf00ebbc97925e1784dd973ef7bd8245741da9b67"
	I1202 19:59:48.725316  427374 cri.go:89] found id: "c59167a3c785bc464e3e63318df704b0084b4a2a24721b883033175b6f4b533f"
	I1202 19:59:48.725319  427374 cri.go:89] found id: "1d0670321bc4abe2d7954d0d6f908cf4e3863170f2e522b0100392c768577198"
	I1202 19:59:48.725331  427374 cri.go:89] found id: "9012f9d6215d108610b3c6096d8b9fd68c47c3b0a9ba15cab4f13cc9e385d4b9"
	I1202 19:59:48.725352  427374 cri.go:89] found id: "457ec4512e89c116a7c5ba880e93b4b91cf5fc694ff53ccf03533d6e1e36de9b"
	I1202 19:59:48.725363  427374 cri.go:89] found id: "91253d86ed19be0b0e1a31e49336ee85f71ca41d7f491fcc1fd6cd2978993ba0"
	I1202 19:59:48.725368  427374 cri.go:89] found id: "1a4586dbac8e8d1828435d72cdf3947bd1869e463e0102cc7b6664ebbeddeacf"
	I1202 19:59:48.725372  427374 cri.go:89] found id: "548b1d008679ffab8ee06c2f360e832860d3a7904cf593b25d31675b7bb892f9"
	I1202 19:59:48.725377  427374 cri.go:89] found id: "92d33e649bb3a4d1fbfddebd2c29786df9d0f9bbe8c4df37c931d3fb4cae82a7"
	I1202 19:59:48.725385  427374 cri.go:89] found id: "36e4834af56306fb62768ad3dbe8f24dbfa293561c0c41bee1c6d418ce06f454"
	I1202 19:59:48.725389  427374 cri.go:89] found id: "1053c12fee90a817e22976e0dc30541fb27c049e02c7c5af353833a13b30e982"
	I1202 19:59:48.725393  427374 cri.go:89] found id: "87e76e15e8595d052d66a4d86ee8b1416a8f60a669646ecbdd55cf8343b8db42"
	I1202 19:59:48.725398  427374 cri.go:89] found id: "54de7a8ca3420358423254cbf3d9a5a5e7140b7f46e22139375e40856000099c"
	I1202 19:59:48.725401  427374 cri.go:89] found id: "64bbafcaa8986f6e93390db3e1aa160fe3cecdd54cdd91e940adc5db87fefb45"
	I1202 19:59:48.725404  427374 cri.go:89] found id: ""
	I1202 19:59:48.725452  427374 ssh_runner.go:195] Run: sudo runc list -f json
	I1202 19:59:48.740544  427374 out.go:203] 
	W1202 19:59:48.741527  427374 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T19:59:48Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T19:59:48Z" level=error msg="open /run/runc: no such file or directory"
	
	W1202 19:59:48.741569  427374 out.go:285] * 
	* 
	W1202 19:59:48.745708  427374 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 19:59:48.747478  427374 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable ingress addon: args "out/minikube-linux-amd64 -p addons-893295 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (148.72s)
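
Note: the repeated exit status 11 in this and the following addon tests comes from minikube's paused-cluster check, which shells into the node and runs `sudo runc list -f json`; on this crio runtime the `/run/runc` state directory does not exist, so every `addons disable` / `addons enable` call aborts with MK_ADDON_DISABLE_PAUSED (or MK_ADDON_ENABLE_PAUSED). A minimal manual-reproduction sketch, assuming the addons-893295 profile from this run is still up (profile name and commands are taken from the log above, not re-verified):

	# The crictl listing that minikube runs just before the runc call succeeds on crio:
	minikube -p addons-893295 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system

	# The follow-up runc query is what exits with status 1 and trips the paused check:
	minikube -p addons-893295 ssh -- sudo runc list -f json
	# observed error on this crio node: open /run/runc: no such file or directory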

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (5.33s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-ps8xn" [83bf3bfe-7c21-4243-a158-bf6b0ee52d3e] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004241197s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-893295 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-893295 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (327.328773ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 19:57:26.588014  424034 out.go:360] Setting OutFile to fd 1 ...
	I1202 19:57:26.588178  424034 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 19:57:26.588191  424034 out.go:374] Setting ErrFile to fd 2...
	I1202 19:57:26.588199  424034 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 19:57:26.588439  424034 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-407427/.minikube/bin
	I1202 19:57:26.588787  424034 mustload.go:66] Loading cluster: addons-893295
	I1202 19:57:26.589233  424034 config.go:182] Loaded profile config "addons-893295": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:57:26.589266  424034 addons.go:622] checking whether the cluster is paused
	I1202 19:57:26.589405  424034 config.go:182] Loaded profile config "addons-893295": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:57:26.589434  424034 host.go:66] Checking if "addons-893295" exists ...
	I1202 19:57:26.590053  424034 cli_runner.go:164] Run: docker container inspect addons-893295 --format={{.State.Status}}
	I1202 19:57:26.617767  424034 ssh_runner.go:195] Run: systemctl --version
	I1202 19:57:26.617853  424034 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-893295
	I1202 19:57:26.643184  424034 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/addons-893295/id_rsa Username:docker}
	I1202 19:57:26.753996  424034 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 19:57:26.754242  424034 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 19:57:26.795634  424034 cri.go:89] found id: "72a3a94a8615446f6a8a6edf8cab89a31462a9125890a07caa7b5c08f54ee5d4"
	I1202 19:57:26.795682  424034 cri.go:89] found id: "3fc3b9c2bb5465a31c0448a05bdfa005e3690110089411631dc7f034b6d8ba5f"
	I1202 19:57:26.795689  424034 cri.go:89] found id: "23592b1014e085ea0e5ab3db08387563e82cae3f3801aefb1a36803352f4b32c"
	I1202 19:57:26.795694  424034 cri.go:89] found id: "4873f6a4745b98e6565829135d48f208fc3b8c8fc38349268058cfe66db69ace"
	I1202 19:57:26.795699  424034 cri.go:89] found id: "69202c0144e36fa98f89f3e4dcc0bb6766cd1a5e7765438a217890a210ccc213"
	I1202 19:57:26.795706  424034 cri.go:89] found id: "e272a50ae70cef4e55de4fc5c4b0afb42c240aef2f0e61c0f58d21f32bb4b1b8"
	I1202 19:57:26.795710  424034 cri.go:89] found id: "343bfc0b495bea2a196f645318c6f732f4aac4d10f89f12fe35398625eac34a6"
	I1202 19:57:26.795715  424034 cri.go:89] found id: "c935f2bdad559803c1b224bb424e2d6a8e3f939cc705debca52e51d3b73805cb"
	I1202 19:57:26.795719  424034 cri.go:89] found id: "2021a9af4b97cf9f19cd51daff4057de8ce4a98c1392ab4618729a6e1fdbe890"
	I1202 19:57:26.795742  424034 cri.go:89] found id: "7d3c2329b0b0c2e623e8d3059a441a596800bfcc5ff55d233343c158bb68d997"
	I1202 19:57:26.795754  424034 cri.go:89] found id: "33d9c5ffbca0f707ad94361bf00ebbc97925e1784dd973ef7bd8245741da9b67"
	I1202 19:57:26.795758  424034 cri.go:89] found id: "c59167a3c785bc464e3e63318df704b0084b4a2a24721b883033175b6f4b533f"
	I1202 19:57:26.795772  424034 cri.go:89] found id: "1d0670321bc4abe2d7954d0d6f908cf4e3863170f2e522b0100392c768577198"
	I1202 19:57:26.795776  424034 cri.go:89] found id: "9012f9d6215d108610b3c6096d8b9fd68c47c3b0a9ba15cab4f13cc9e385d4b9"
	I1202 19:57:26.795780  424034 cri.go:89] found id: "457ec4512e89c116a7c5ba880e93b4b91cf5fc694ff53ccf03533d6e1e36de9b"
	I1202 19:57:26.795801  424034 cri.go:89] found id: "91253d86ed19be0b0e1a31e49336ee85f71ca41d7f491fcc1fd6cd2978993ba0"
	I1202 19:57:26.795809  424034 cri.go:89] found id: "1a4586dbac8e8d1828435d72cdf3947bd1869e463e0102cc7b6664ebbeddeacf"
	I1202 19:57:26.795816  424034 cri.go:89] found id: "548b1d008679ffab8ee06c2f360e832860d3a7904cf593b25d31675b7bb892f9"
	I1202 19:57:26.795820  424034 cri.go:89] found id: "92d33e649bb3a4d1fbfddebd2c29786df9d0f9bbe8c4df37c931d3fb4cae82a7"
	I1202 19:57:26.795824  424034 cri.go:89] found id: "36e4834af56306fb62768ad3dbe8f24dbfa293561c0c41bee1c6d418ce06f454"
	I1202 19:57:26.795833  424034 cri.go:89] found id: "1053c12fee90a817e22976e0dc30541fb27c049e02c7c5af353833a13b30e982"
	I1202 19:57:26.795837  424034 cri.go:89] found id: "87e76e15e8595d052d66a4d86ee8b1416a8f60a669646ecbdd55cf8343b8db42"
	I1202 19:57:26.795841  424034 cri.go:89] found id: "54de7a8ca3420358423254cbf3d9a5a5e7140b7f46e22139375e40856000099c"
	I1202 19:57:26.795846  424034 cri.go:89] found id: "64bbafcaa8986f6e93390db3e1aa160fe3cecdd54cdd91e940adc5db87fefb45"
	I1202 19:57:26.795850  424034 cri.go:89] found id: ""
	I1202 19:57:26.795914  424034 ssh_runner.go:195] Run: sudo runc list -f json
	I1202 19:57:26.814631  424034 out.go:203] 
	W1202 19:57:26.816141  424034 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T19:57:26Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T19:57:26Z" level=error msg="open /run/runc: no such file or directory"
	
	W1202 19:57:26.816173  424034 out.go:285] * 
	* 
	W1202 19:57:26.822588  424034 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 19:57:26.824166  424034 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable inspektor-gadget addon: args "out/minikube-linux-amd64 -p addons-893295 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (5.33s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (5.34s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 3.338686ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-fbhzv" [51840c60-3fa9-4717-85ec-69d3082c6537] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.002838129s
addons_test.go:463: (dbg) Run:  kubectl --context addons-893295 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-893295 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-893295 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (267.165607ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 19:57:10.743448  421904 out.go:360] Setting OutFile to fd 1 ...
	I1202 19:57:10.743621  421904 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 19:57:10.743633  421904 out.go:374] Setting ErrFile to fd 2...
	I1202 19:57:10.743638  421904 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 19:57:10.743966  421904 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-407427/.minikube/bin
	I1202 19:57:10.744301  421904 mustload.go:66] Loading cluster: addons-893295
	I1202 19:57:10.744629  421904 config.go:182] Loaded profile config "addons-893295": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:57:10.744648  421904 addons.go:622] checking whether the cluster is paused
	I1202 19:57:10.744727  421904 config.go:182] Loaded profile config "addons-893295": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:57:10.744744  421904 host.go:66] Checking if "addons-893295" exists ...
	I1202 19:57:10.745150  421904 cli_runner.go:164] Run: docker container inspect addons-893295 --format={{.State.Status}}
	I1202 19:57:10.763451  421904 ssh_runner.go:195] Run: systemctl --version
	I1202 19:57:10.763500  421904 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-893295
	I1202 19:57:10.782732  421904 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/addons-893295/id_rsa Username:docker}
	I1202 19:57:10.882453  421904 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 19:57:10.882568  421904 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 19:57:10.915828  421904 cri.go:89] found id: "72a3a94a8615446f6a8a6edf8cab89a31462a9125890a07caa7b5c08f54ee5d4"
	I1202 19:57:10.915852  421904 cri.go:89] found id: "3fc3b9c2bb5465a31c0448a05bdfa005e3690110089411631dc7f034b6d8ba5f"
	I1202 19:57:10.915857  421904 cri.go:89] found id: "23592b1014e085ea0e5ab3db08387563e82cae3f3801aefb1a36803352f4b32c"
	I1202 19:57:10.915862  421904 cri.go:89] found id: "4873f6a4745b98e6565829135d48f208fc3b8c8fc38349268058cfe66db69ace"
	I1202 19:57:10.915866  421904 cri.go:89] found id: "69202c0144e36fa98f89f3e4dcc0bb6766cd1a5e7765438a217890a210ccc213"
	I1202 19:57:10.915871  421904 cri.go:89] found id: "e272a50ae70cef4e55de4fc5c4b0afb42c240aef2f0e61c0f58d21f32bb4b1b8"
	I1202 19:57:10.915875  421904 cri.go:89] found id: "343bfc0b495bea2a196f645318c6f732f4aac4d10f89f12fe35398625eac34a6"
	I1202 19:57:10.915880  421904 cri.go:89] found id: "c935f2bdad559803c1b224bb424e2d6a8e3f939cc705debca52e51d3b73805cb"
	I1202 19:57:10.915884  421904 cri.go:89] found id: "2021a9af4b97cf9f19cd51daff4057de8ce4a98c1392ab4618729a6e1fdbe890"
	I1202 19:57:10.915899  421904 cri.go:89] found id: "7d3c2329b0b0c2e623e8d3059a441a596800bfcc5ff55d233343c158bb68d997"
	I1202 19:57:10.915908  421904 cri.go:89] found id: "33d9c5ffbca0f707ad94361bf00ebbc97925e1784dd973ef7bd8245741da9b67"
	I1202 19:57:10.915914  421904 cri.go:89] found id: "c59167a3c785bc464e3e63318df704b0084b4a2a24721b883033175b6f4b533f"
	I1202 19:57:10.915922  421904 cri.go:89] found id: "1d0670321bc4abe2d7954d0d6f908cf4e3863170f2e522b0100392c768577198"
	I1202 19:57:10.915927  421904 cri.go:89] found id: "9012f9d6215d108610b3c6096d8b9fd68c47c3b0a9ba15cab4f13cc9e385d4b9"
	I1202 19:57:10.915935  421904 cri.go:89] found id: "457ec4512e89c116a7c5ba880e93b4b91cf5fc694ff53ccf03533d6e1e36de9b"
	I1202 19:57:10.915952  421904 cri.go:89] found id: "91253d86ed19be0b0e1a31e49336ee85f71ca41d7f491fcc1fd6cd2978993ba0"
	I1202 19:57:10.915963  421904 cri.go:89] found id: "1a4586dbac8e8d1828435d72cdf3947bd1869e463e0102cc7b6664ebbeddeacf"
	I1202 19:57:10.915989  421904 cri.go:89] found id: "548b1d008679ffab8ee06c2f360e832860d3a7904cf593b25d31675b7bb892f9"
	I1202 19:57:10.915996  421904 cri.go:89] found id: "92d33e649bb3a4d1fbfddebd2c29786df9d0f9bbe8c4df37c931d3fb4cae82a7"
	I1202 19:57:10.916002  421904 cri.go:89] found id: "36e4834af56306fb62768ad3dbe8f24dbfa293561c0c41bee1c6d418ce06f454"
	I1202 19:57:10.916011  421904 cri.go:89] found id: "1053c12fee90a817e22976e0dc30541fb27c049e02c7c5af353833a13b30e982"
	I1202 19:57:10.916019  421904 cri.go:89] found id: "87e76e15e8595d052d66a4d86ee8b1416a8f60a669646ecbdd55cf8343b8db42"
	I1202 19:57:10.916025  421904 cri.go:89] found id: "54de7a8ca3420358423254cbf3d9a5a5e7140b7f46e22139375e40856000099c"
	I1202 19:57:10.916032  421904 cri.go:89] found id: "64bbafcaa8986f6e93390db3e1aa160fe3cecdd54cdd91e940adc5db87fefb45"
	I1202 19:57:10.916038  421904 cri.go:89] found id: ""
	I1202 19:57:10.916112  421904 ssh_runner.go:195] Run: sudo runc list -f json
	I1202 19:57:10.932325  421904 out.go:203] 
	W1202 19:57:10.933606  421904 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T19:57:10Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T19:57:10Z" level=error msg="open /run/runc: no such file or directory"
	
	W1202 19:57:10.933633  421904 out.go:285] * 
	* 
	W1202 19:57:10.939027  421904 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 19:57:10.942397  421904 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable metrics-server addon: args "out/minikube-linux-amd64 -p addons-893295 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (5.34s)

                                                
                                    
x
+
TestAddons/parallel/CSI (53.33s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1202 19:57:08.287440  411032 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1202 19:57:08.291306  411032 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1202 19:57:08.291339  411032 kapi.go:107] duration metric: took 3.928443ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 3.942431ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-893295 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-893295 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-893295 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-893295 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-893295 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-893295 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-893295 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-893295 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-893295 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-893295 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-893295 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-893295 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-893295 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-893295 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-893295 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-893295 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-893295 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-893295 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-893295 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [86ab7c80-0e20-4224-a726-b118d4563869] Pending
helpers_test.go:352: "task-pv-pod" [86ab7c80-0e20-4224-a726-b118d4563869] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [86ab7c80-0e20-4224-a726-b118d4563869] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.00390243s
addons_test.go:572: (dbg) Run:  kubectl --context addons-893295 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-893295 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-893295 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-893295 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-893295 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-893295 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-893295 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-893295 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-893295 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-893295 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-893295 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-893295 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-893295 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-893295 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-893295 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-893295 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-893295 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-893295 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-893295 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-893295 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-893295 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-893295 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [ec77a64b-cf94-4e6d-b3b6-eac74cec8b54] Pending
helpers_test.go:352: "task-pv-pod-restore" [ec77a64b-cf94-4e6d-b3b6-eac74cec8b54] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [ec77a64b-cf94-4e6d-b3b6-eac74cec8b54] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004347301s
addons_test.go:614: (dbg) Run:  kubectl --context addons-893295 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-893295 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-893295 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-893295 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-893295 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (262.036092ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 19:58:01.148583  425082 out.go:360] Setting OutFile to fd 1 ...
	I1202 19:58:01.148914  425082 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 19:58:01.148925  425082 out.go:374] Setting ErrFile to fd 2...
	I1202 19:58:01.148930  425082 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 19:58:01.149144  425082 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-407427/.minikube/bin
	I1202 19:58:01.149442  425082 mustload.go:66] Loading cluster: addons-893295
	I1202 19:58:01.149786  425082 config.go:182] Loaded profile config "addons-893295": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:58:01.149807  425082 addons.go:622] checking whether the cluster is paused
	I1202 19:58:01.149885  425082 config.go:182] Loaded profile config "addons-893295": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:58:01.149908  425082 host.go:66] Checking if "addons-893295" exists ...
	I1202 19:58:01.150299  425082 cli_runner.go:164] Run: docker container inspect addons-893295 --format={{.State.Status}}
	I1202 19:58:01.169989  425082 ssh_runner.go:195] Run: systemctl --version
	I1202 19:58:01.170063  425082 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-893295
	I1202 19:58:01.189513  425082 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/addons-893295/id_rsa Username:docker}
	I1202 19:58:01.290412  425082 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 19:58:01.290531  425082 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 19:58:01.322269  425082 cri.go:89] found id: "72a3a94a8615446f6a8a6edf8cab89a31462a9125890a07caa7b5c08f54ee5d4"
	I1202 19:58:01.322294  425082 cri.go:89] found id: "3fc3b9c2bb5465a31c0448a05bdfa005e3690110089411631dc7f034b6d8ba5f"
	I1202 19:58:01.322301  425082 cri.go:89] found id: "23592b1014e085ea0e5ab3db08387563e82cae3f3801aefb1a36803352f4b32c"
	I1202 19:58:01.322305  425082 cri.go:89] found id: "4873f6a4745b98e6565829135d48f208fc3b8c8fc38349268058cfe66db69ace"
	I1202 19:58:01.322309  425082 cri.go:89] found id: "69202c0144e36fa98f89f3e4dcc0bb6766cd1a5e7765438a217890a210ccc213"
	I1202 19:58:01.322314  425082 cri.go:89] found id: "e272a50ae70cef4e55de4fc5c4b0afb42c240aef2f0e61c0f58d21f32bb4b1b8"
	I1202 19:58:01.322318  425082 cri.go:89] found id: "343bfc0b495bea2a196f645318c6f732f4aac4d10f89f12fe35398625eac34a6"
	I1202 19:58:01.322322  425082 cri.go:89] found id: "c935f2bdad559803c1b224bb424e2d6a8e3f939cc705debca52e51d3b73805cb"
	I1202 19:58:01.322326  425082 cri.go:89] found id: "2021a9af4b97cf9f19cd51daff4057de8ce4a98c1392ab4618729a6e1fdbe890"
	I1202 19:58:01.322333  425082 cri.go:89] found id: "7d3c2329b0b0c2e623e8d3059a441a596800bfcc5ff55d233343c158bb68d997"
	I1202 19:58:01.322338  425082 cri.go:89] found id: "33d9c5ffbca0f707ad94361bf00ebbc97925e1784dd973ef7bd8245741da9b67"
	I1202 19:58:01.322342  425082 cri.go:89] found id: "c59167a3c785bc464e3e63318df704b0084b4a2a24721b883033175b6f4b533f"
	I1202 19:58:01.322349  425082 cri.go:89] found id: "1d0670321bc4abe2d7954d0d6f908cf4e3863170f2e522b0100392c768577198"
	I1202 19:58:01.322353  425082 cri.go:89] found id: "9012f9d6215d108610b3c6096d8b9fd68c47c3b0a9ba15cab4f13cc9e385d4b9"
	I1202 19:58:01.322358  425082 cri.go:89] found id: "457ec4512e89c116a7c5ba880e93b4b91cf5fc694ff53ccf03533d6e1e36de9b"
	I1202 19:58:01.322370  425082 cri.go:89] found id: "91253d86ed19be0b0e1a31e49336ee85f71ca41d7f491fcc1fd6cd2978993ba0"
	I1202 19:58:01.322379  425082 cri.go:89] found id: "1a4586dbac8e8d1828435d72cdf3947bd1869e463e0102cc7b6664ebbeddeacf"
	I1202 19:58:01.322386  425082 cri.go:89] found id: "548b1d008679ffab8ee06c2f360e832860d3a7904cf593b25d31675b7bb892f9"
	I1202 19:58:01.322391  425082 cri.go:89] found id: "92d33e649bb3a4d1fbfddebd2c29786df9d0f9bbe8c4df37c931d3fb4cae82a7"
	I1202 19:58:01.322395  425082 cri.go:89] found id: "36e4834af56306fb62768ad3dbe8f24dbfa293561c0c41bee1c6d418ce06f454"
	I1202 19:58:01.322400  425082 cri.go:89] found id: "1053c12fee90a817e22976e0dc30541fb27c049e02c7c5af353833a13b30e982"
	I1202 19:58:01.322408  425082 cri.go:89] found id: "87e76e15e8595d052d66a4d86ee8b1416a8f60a669646ecbdd55cf8343b8db42"
	I1202 19:58:01.322414  425082 cri.go:89] found id: "54de7a8ca3420358423254cbf3d9a5a5e7140b7f46e22139375e40856000099c"
	I1202 19:58:01.322421  425082 cri.go:89] found id: "64bbafcaa8986f6e93390db3e1aa160fe3cecdd54cdd91e940adc5db87fefb45"
	I1202 19:58:01.322427  425082 cri.go:89] found id: ""
	I1202 19:58:01.322492  425082 ssh_runner.go:195] Run: sudo runc list -f json
	I1202 19:58:01.338257  425082 out.go:203] 
	W1202 19:58:01.339501  425082 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T19:58:01Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T19:58:01Z" level=error msg="open /run/runc: no such file or directory"
	
	W1202 19:58:01.339530  425082 out.go:285] * 
	* 
	W1202 19:58:01.343665  425082 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 19:58:01.345012  425082 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable volumesnapshots addon: args "out/minikube-linux-amd64 -p addons-893295 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-893295 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-893295 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (268.300426ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 19:58:01.411735  425145 out.go:360] Setting OutFile to fd 1 ...
	I1202 19:58:01.412421  425145 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 19:58:01.412439  425145 out.go:374] Setting ErrFile to fd 2...
	I1202 19:58:01.412447  425145 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 19:58:01.412896  425145 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-407427/.minikube/bin
	I1202 19:58:01.413394  425145 mustload.go:66] Loading cluster: addons-893295
	I1202 19:58:01.414131  425145 config.go:182] Loaded profile config "addons-893295": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:58:01.414169  425145 addons.go:622] checking whether the cluster is paused
	I1202 19:58:01.414296  425145 config.go:182] Loaded profile config "addons-893295": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:58:01.414335  425145 host.go:66] Checking if "addons-893295" exists ...
	I1202 19:58:01.414766  425145 cli_runner.go:164] Run: docker container inspect addons-893295 --format={{.State.Status}}
	I1202 19:58:01.435572  425145 ssh_runner.go:195] Run: systemctl --version
	I1202 19:58:01.435643  425145 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-893295
	I1202 19:58:01.456856  425145 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/addons-893295/id_rsa Username:docker}
	I1202 19:58:01.557683  425145 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 19:58:01.557783  425145 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 19:58:01.591649  425145 cri.go:89] found id: "72a3a94a8615446f6a8a6edf8cab89a31462a9125890a07caa7b5c08f54ee5d4"
	I1202 19:58:01.591675  425145 cri.go:89] found id: "3fc3b9c2bb5465a31c0448a05bdfa005e3690110089411631dc7f034b6d8ba5f"
	I1202 19:58:01.591681  425145 cri.go:89] found id: "23592b1014e085ea0e5ab3db08387563e82cae3f3801aefb1a36803352f4b32c"
	I1202 19:58:01.591686  425145 cri.go:89] found id: "4873f6a4745b98e6565829135d48f208fc3b8c8fc38349268058cfe66db69ace"
	I1202 19:58:01.591691  425145 cri.go:89] found id: "69202c0144e36fa98f89f3e4dcc0bb6766cd1a5e7765438a217890a210ccc213"
	I1202 19:58:01.591695  425145 cri.go:89] found id: "e272a50ae70cef4e55de4fc5c4b0afb42c240aef2f0e61c0f58d21f32bb4b1b8"
	I1202 19:58:01.591699  425145 cri.go:89] found id: "343bfc0b495bea2a196f645318c6f732f4aac4d10f89f12fe35398625eac34a6"
	I1202 19:58:01.591703  425145 cri.go:89] found id: "c935f2bdad559803c1b224bb424e2d6a8e3f939cc705debca52e51d3b73805cb"
	I1202 19:58:01.591708  425145 cri.go:89] found id: "2021a9af4b97cf9f19cd51daff4057de8ce4a98c1392ab4618729a6e1fdbe890"
	I1202 19:58:01.591717  425145 cri.go:89] found id: "7d3c2329b0b0c2e623e8d3059a441a596800bfcc5ff55d233343c158bb68d997"
	I1202 19:58:01.591721  425145 cri.go:89] found id: "33d9c5ffbca0f707ad94361bf00ebbc97925e1784dd973ef7bd8245741da9b67"
	I1202 19:58:01.591725  425145 cri.go:89] found id: "c59167a3c785bc464e3e63318df704b0084b4a2a24721b883033175b6f4b533f"
	I1202 19:58:01.591730  425145 cri.go:89] found id: "1d0670321bc4abe2d7954d0d6f908cf4e3863170f2e522b0100392c768577198"
	I1202 19:58:01.591735  425145 cri.go:89] found id: "9012f9d6215d108610b3c6096d8b9fd68c47c3b0a9ba15cab4f13cc9e385d4b9"
	I1202 19:58:01.591740  425145 cri.go:89] found id: "457ec4512e89c116a7c5ba880e93b4b91cf5fc694ff53ccf03533d6e1e36de9b"
	I1202 19:58:01.591751  425145 cri.go:89] found id: "91253d86ed19be0b0e1a31e49336ee85f71ca41d7f491fcc1fd6cd2978993ba0"
	I1202 19:58:01.591759  425145 cri.go:89] found id: "1a4586dbac8e8d1828435d72cdf3947bd1869e463e0102cc7b6664ebbeddeacf"
	I1202 19:58:01.591766  425145 cri.go:89] found id: "548b1d008679ffab8ee06c2f360e832860d3a7904cf593b25d31675b7bb892f9"
	I1202 19:58:01.591771  425145 cri.go:89] found id: "92d33e649bb3a4d1fbfddebd2c29786df9d0f9bbe8c4df37c931d3fb4cae82a7"
	I1202 19:58:01.591775  425145 cri.go:89] found id: "36e4834af56306fb62768ad3dbe8f24dbfa293561c0c41bee1c6d418ce06f454"
	I1202 19:58:01.591779  425145 cri.go:89] found id: "1053c12fee90a817e22976e0dc30541fb27c049e02c7c5af353833a13b30e982"
	I1202 19:58:01.591784  425145 cri.go:89] found id: "87e76e15e8595d052d66a4d86ee8b1416a8f60a669646ecbdd55cf8343b8db42"
	I1202 19:58:01.591793  425145 cri.go:89] found id: "54de7a8ca3420358423254cbf3d9a5a5e7140b7f46e22139375e40856000099c"
	I1202 19:58:01.591797  425145 cri.go:89] found id: "64bbafcaa8986f6e93390db3e1aa160fe3cecdd54cdd91e940adc5db87fefb45"
	I1202 19:58:01.591801  425145 cri.go:89] found id: ""
	I1202 19:58:01.591856  425145 ssh_runner.go:195] Run: sudo runc list -f json
	I1202 19:58:01.607291  425145 out.go:203] 
	W1202 19:58:01.608849  425145 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T19:58:01Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T19:58:01Z" level=error msg="open /run/runc: no such file or directory"
	
	W1202 19:58:01.608879  425145 out.go:285] * 
	* 
	W1202 19:58:01.613009  425145 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 19:58:01.614326  425145 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-amd64 -p addons-893295 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (53.33s)

                                                
                                    
x
+
TestAddons/parallel/Headlamp (2.68s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-893295 --alsologtostderr -v=1
addons_test.go:808: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable headlamp -p addons-893295 --alsologtostderr -v=1: exit status 11 (264.989993ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 19:57:05.674840  421027 out.go:360] Setting OutFile to fd 1 ...
	I1202 19:57:05.675164  421027 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 19:57:05.675177  421027 out.go:374] Setting ErrFile to fd 2...
	I1202 19:57:05.675181  421027 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 19:57:05.675406  421027 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-407427/.minikube/bin
	I1202 19:57:05.675709  421027 mustload.go:66] Loading cluster: addons-893295
	I1202 19:57:05.676213  421027 config.go:182] Loaded profile config "addons-893295": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:57:05.676246  421027 addons.go:622] checking whether the cluster is paused
	I1202 19:57:05.676378  421027 config.go:182] Loaded profile config "addons-893295": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:57:05.676403  421027 host.go:66] Checking if "addons-893295" exists ...
	I1202 19:57:05.676989  421027 cli_runner.go:164] Run: docker container inspect addons-893295 --format={{.State.Status}}
	I1202 19:57:05.696129  421027 ssh_runner.go:195] Run: systemctl --version
	I1202 19:57:05.696205  421027 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-893295
	I1202 19:57:05.714978  421027 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/addons-893295/id_rsa Username:docker}
	I1202 19:57:05.815166  421027 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 19:57:05.815289  421027 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 19:57:05.847524  421027 cri.go:89] found id: "72a3a94a8615446f6a8a6edf8cab89a31462a9125890a07caa7b5c08f54ee5d4"
	I1202 19:57:05.847549  421027 cri.go:89] found id: "3fc3b9c2bb5465a31c0448a05bdfa005e3690110089411631dc7f034b6d8ba5f"
	I1202 19:57:05.847555  421027 cri.go:89] found id: "23592b1014e085ea0e5ab3db08387563e82cae3f3801aefb1a36803352f4b32c"
	I1202 19:57:05.847559  421027 cri.go:89] found id: "4873f6a4745b98e6565829135d48f208fc3b8c8fc38349268058cfe66db69ace"
	I1202 19:57:05.847563  421027 cri.go:89] found id: "69202c0144e36fa98f89f3e4dcc0bb6766cd1a5e7765438a217890a210ccc213"
	I1202 19:57:05.847568  421027 cri.go:89] found id: "e272a50ae70cef4e55de4fc5c4b0afb42c240aef2f0e61c0f58d21f32bb4b1b8"
	I1202 19:57:05.847584  421027 cri.go:89] found id: "343bfc0b495bea2a196f645318c6f732f4aac4d10f89f12fe35398625eac34a6"
	I1202 19:57:05.847588  421027 cri.go:89] found id: "c935f2bdad559803c1b224bb424e2d6a8e3f939cc705debca52e51d3b73805cb"
	I1202 19:57:05.847592  421027 cri.go:89] found id: "2021a9af4b97cf9f19cd51daff4057de8ce4a98c1392ab4618729a6e1fdbe890"
	I1202 19:57:05.847609  421027 cri.go:89] found id: "7d3c2329b0b0c2e623e8d3059a441a596800bfcc5ff55d233343c158bb68d997"
	I1202 19:57:05.847618  421027 cri.go:89] found id: "33d9c5ffbca0f707ad94361bf00ebbc97925e1784dd973ef7bd8245741da9b67"
	I1202 19:57:05.847624  421027 cri.go:89] found id: "c59167a3c785bc464e3e63318df704b0084b4a2a24721b883033175b6f4b533f"
	I1202 19:57:05.847629  421027 cri.go:89] found id: "1d0670321bc4abe2d7954d0d6f908cf4e3863170f2e522b0100392c768577198"
	I1202 19:57:05.847637  421027 cri.go:89] found id: "9012f9d6215d108610b3c6096d8b9fd68c47c3b0a9ba15cab4f13cc9e385d4b9"
	I1202 19:57:05.847642  421027 cri.go:89] found id: "457ec4512e89c116a7c5ba880e93b4b91cf5fc694ff53ccf03533d6e1e36de9b"
	I1202 19:57:05.847658  421027 cri.go:89] found id: "91253d86ed19be0b0e1a31e49336ee85f71ca41d7f491fcc1fd6cd2978993ba0"
	I1202 19:57:05.847667  421027 cri.go:89] found id: "1a4586dbac8e8d1828435d72cdf3947bd1869e463e0102cc7b6664ebbeddeacf"
	I1202 19:57:05.847672  421027 cri.go:89] found id: "548b1d008679ffab8ee06c2f360e832860d3a7904cf593b25d31675b7bb892f9"
	I1202 19:57:05.847677  421027 cri.go:89] found id: "92d33e649bb3a4d1fbfddebd2c29786df9d0f9bbe8c4df37c931d3fb4cae82a7"
	I1202 19:57:05.847681  421027 cri.go:89] found id: "36e4834af56306fb62768ad3dbe8f24dbfa293561c0c41bee1c6d418ce06f454"
	I1202 19:57:05.847685  421027 cri.go:89] found id: "1053c12fee90a817e22976e0dc30541fb27c049e02c7c5af353833a13b30e982"
	I1202 19:57:05.847689  421027 cri.go:89] found id: "87e76e15e8595d052d66a4d86ee8b1416a8f60a669646ecbdd55cf8343b8db42"
	I1202 19:57:05.847693  421027 cri.go:89] found id: "54de7a8ca3420358423254cbf3d9a5a5e7140b7f46e22139375e40856000099c"
	I1202 19:57:05.847697  421027 cri.go:89] found id: "64bbafcaa8986f6e93390db3e1aa160fe3cecdd54cdd91e940adc5db87fefb45"
	I1202 19:57:05.847701  421027 cri.go:89] found id: ""
	I1202 19:57:05.847747  421027 ssh_runner.go:195] Run: sudo runc list -f json
	I1202 19:57:05.862286  421027 out.go:203] 
	W1202 19:57:05.863246  421027 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T19:57:05Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T19:57:05Z" level=error msg="open /run/runc: no such file or directory"
	
	W1202 19:57:05.863264  421027 out.go:285] * 
	* 
	W1202 19:57:05.867387  421027 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 19:57:05.868640  421027 out.go:203] 

** /stderr **
addons_test.go:810: failed to enable headlamp addon: args: "out/minikube-linux-amd64 addons enable headlamp -p addons-893295 --alsologtostderr -v=1": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-893295
helpers_test.go:243: (dbg) docker inspect addons-893295:

-- stdout --
	[
	    {
	        "Id": "fb1b06b464b8e1cc0be3d869922ad319eca24c0f73d9dd3623150e70a87dad64",
	        "Created": "2025-12-02T19:55:02.532086274Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 413487,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-02T19:55:02.571981154Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/fb1b06b464b8e1cc0be3d869922ad319eca24c0f73d9dd3623150e70a87dad64/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/fb1b06b464b8e1cc0be3d869922ad319eca24c0f73d9dd3623150e70a87dad64/hostname",
	        "HostsPath": "/var/lib/docker/containers/fb1b06b464b8e1cc0be3d869922ad319eca24c0f73d9dd3623150e70a87dad64/hosts",
	        "LogPath": "/var/lib/docker/containers/fb1b06b464b8e1cc0be3d869922ad319eca24c0f73d9dd3623150e70a87dad64/fb1b06b464b8e1cc0be3d869922ad319eca24c0f73d9dd3623150e70a87dad64-json.log",
	        "Name": "/addons-893295",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-893295:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-893295",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "fb1b06b464b8e1cc0be3d869922ad319eca24c0f73d9dd3623150e70a87dad64",
	                "LowerDir": "/var/lib/docker/overlay2/51fe9b3afe0210445cec2e2cd1c061e3ff5977b7927ed6e339e2f8b682072296-init/diff:/var/lib/docker/overlay2/49bb44e987885c48600d5ae7ad3c81cad82a00f38070c0460882c5746d9fae59/diff",
	                "MergedDir": "/var/lib/docker/overlay2/51fe9b3afe0210445cec2e2cd1c061e3ff5977b7927ed6e339e2f8b682072296/merged",
	                "UpperDir": "/var/lib/docker/overlay2/51fe9b3afe0210445cec2e2cd1c061e3ff5977b7927ed6e339e2f8b682072296/diff",
	                "WorkDir": "/var/lib/docker/overlay2/51fe9b3afe0210445cec2e2cd1c061e3ff5977b7927ed6e339e2f8b682072296/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-893295",
	                "Source": "/var/lib/docker/volumes/addons-893295/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-893295",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-893295",
	                "name.minikube.sigs.k8s.io": "addons-893295",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "4ff1f263f2ab23bcf25eef62bdfec9099c29759ef04c86831ba29bad921bbe62",
	            "SandboxKey": "/var/run/docker/netns/4ff1f263f2ab",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33148"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33149"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33152"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33150"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33151"
	                    }
	                ]
	            },
	            "Networks": {
	                "addons-893295": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "15be3e52e05572810bbe7de119c632146eb5eadd30ee490522b569aa947428b3",
	                    "EndpointID": "a16df17d6d0f82d89ddd5d38762572af57ae1a1fc7c6e0cce2a6ec038b7dfc3a",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "da:b8:cf:f3:41:f2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-893295",
	                        "fb1b06b464b8"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-893295 -n addons-893295
helpers_test.go:252: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-893295 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-893295 logs -n 25: (1.196903249s)
helpers_test.go:260: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-243407 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-243407   │ jenkins │ v1.37.0 │ 02 Dec 25 19:54 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 02 Dec 25 19:54 UTC │ 02 Dec 25 19:54 UTC │
	│ delete  │ -p download-only-243407                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-243407   │ jenkins │ v1.37.0 │ 02 Dec 25 19:54 UTC │ 02 Dec 25 19:54 UTC │
	│ start   │ -o=json --download-only -p download-only-278754 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-278754   │ jenkins │ v1.37.0 │ 02 Dec 25 19:54 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 02 Dec 25 19:54 UTC │ 02 Dec 25 19:54 UTC │
	│ delete  │ -p download-only-278754                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-278754   │ jenkins │ v1.37.0 │ 02 Dec 25 19:54 UTC │ 02 Dec 25 19:54 UTC │
	│ start   │ -o=json --download-only -p download-only-993370 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                         │ download-only-993370   │ jenkins │ v1.37.0 │ 02 Dec 25 19:54 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 02 Dec 25 19:54 UTC │ 02 Dec 25 19:54 UTC │
	│ delete  │ -p download-only-993370                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-993370   │ jenkins │ v1.37.0 │ 02 Dec 25 19:54 UTC │ 02 Dec 25 19:54 UTC │
	│ delete  │ -p download-only-243407                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-243407   │ jenkins │ v1.37.0 │ 02 Dec 25 19:54 UTC │ 02 Dec 25 19:54 UTC │
	│ delete  │ -p download-only-278754                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-278754   │ jenkins │ v1.37.0 │ 02 Dec 25 19:54 UTC │ 02 Dec 25 19:54 UTC │
	│ delete  │ -p download-only-993370                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-993370   │ jenkins │ v1.37.0 │ 02 Dec 25 19:54 UTC │ 02 Dec 25 19:54 UTC │
	│ start   │ --download-only -p download-docker-261487 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-261487 │ jenkins │ v1.37.0 │ 02 Dec 25 19:54 UTC │                     │
	│ delete  │ -p download-docker-261487                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-261487 │ jenkins │ v1.37.0 │ 02 Dec 25 19:54 UTC │ 02 Dec 25 19:54 UTC │
	│ start   │ --download-only -p binary-mirror-599276 --alsologtostderr --binary-mirror http://127.0.0.1:35789 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-599276   │ jenkins │ v1.37.0 │ 02 Dec 25 19:54 UTC │                     │
	│ delete  │ -p binary-mirror-599276                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-599276   │ jenkins │ v1.37.0 │ 02 Dec 25 19:54 UTC │ 02 Dec 25 19:54 UTC │
	│ addons  │ disable dashboard -p addons-893295                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-893295          │ jenkins │ v1.37.0 │ 02 Dec 25 19:54 UTC │                     │
	│ addons  │ enable dashboard -p addons-893295                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-893295          │ jenkins │ v1.37.0 │ 02 Dec 25 19:54 UTC │                     │
	│ start   │ -p addons-893295 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-893295          │ jenkins │ v1.37.0 │ 02 Dec 25 19:54 UTC │ 02 Dec 25 19:56 UTC │
	│ addons  │ addons-893295 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-893295          │ jenkins │ v1.37.0 │ 02 Dec 25 19:56 UTC │                     │
	│ addons  │ addons-893295 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-893295          │ jenkins │ v1.37.0 │ 02 Dec 25 19:57 UTC │                     │
	│ addons  │ enable headlamp -p addons-893295 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-893295          │ jenkins │ v1.37.0 │ 02 Dec 25 19:57 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 19:54:41
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 19:54:41.754876  412831 out.go:360] Setting OutFile to fd 1 ...
	I1202 19:54:41.755164  412831 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 19:54:41.755175  412831 out.go:374] Setting ErrFile to fd 2...
	I1202 19:54:41.755180  412831 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 19:54:41.755413  412831 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-407427/.minikube/bin
	I1202 19:54:41.756120  412831 out.go:368] Setting JSON to false
	I1202 19:54:41.757095  412831 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":5826,"bootTime":1764699456,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 19:54:41.757156  412831 start.go:143] virtualization: kvm guest
	I1202 19:54:41.759099  412831 out.go:179] * [addons-893295] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1202 19:54:41.760375  412831 notify.go:221] Checking for updates...
	I1202 19:54:41.760393  412831 out.go:179]   - MINIKUBE_LOCATION=21997
	I1202 19:54:41.761678  412831 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 19:54:41.763325  412831 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-407427/kubeconfig
	I1202 19:54:41.764668  412831 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-407427/.minikube
	I1202 19:54:41.765832  412831 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1202 19:54:41.766922  412831 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 19:54:41.768321  412831 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 19:54:41.792860  412831 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1202 19:54:41.793026  412831 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 19:54:41.854417  412831 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:false NGoroutines:45 SystemTime:2025-12-02 19:54:41.843751139 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 19:54:41.854523  412831 docker.go:319] overlay module found
	I1202 19:54:41.856369  412831 out.go:179] * Using the docker driver based on user configuration
	I1202 19:54:41.857423  412831 start.go:309] selected driver: docker
	I1202 19:54:41.857444  412831 start.go:927] validating driver "docker" against <nil>
	I1202 19:54:41.857459  412831 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 19:54:41.858082  412831 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 19:54:41.917656  412831 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:false NGoroutines:45 SystemTime:2025-12-02 19:54:41.907921772 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 19:54:41.917867  412831 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1202 19:54:41.918131  412831 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 19:54:41.919739  412831 out.go:179] * Using Docker driver with root privileges
	I1202 19:54:41.920795  412831 cni.go:84] Creating CNI manager for ""
	I1202 19:54:41.920867  412831 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 19:54:41.920879  412831 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1202 19:54:41.920990  412831 start.go:353] cluster config:
	{Name:addons-893295 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-893295 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I1202 19:54:41.922295  412831 out.go:179] * Starting "addons-893295" primary control-plane node in "addons-893295" cluster
	I1202 19:54:41.923633  412831 cache.go:134] Beginning downloading kic base image for docker with crio
	I1202 19:54:41.924802  412831 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1202 19:54:41.925799  412831 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 19:54:41.925843  412831 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21997-407427/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1202 19:54:41.925854  412831 cache.go:65] Caching tarball of preloaded images
	I1202 19:54:41.925904  412831 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 19:54:41.925966  412831 preload.go:238] Found /home/jenkins/minikube-integration/21997-407427/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1202 19:54:41.925980  412831 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1202 19:54:41.926350  412831 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/addons-893295/config.json ...
	I1202 19:54:41.926386  412831 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/addons-893295/config.json: {Name:mk60be7980c08c9778afd7456fa6ca920b75e519 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:54:41.943426  412831 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b to local cache
	I1202 19:54:41.943564  412831 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local cache directory
	I1202 19:54:41.943582  412831 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local cache directory, skipping pull
	I1202 19:54:41.943587  412831 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in cache, skipping pull
	I1202 19:54:41.943594  412831 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b as a tarball
	I1202 19:54:41.943598  412831 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b from local cache
	I1202 19:54:54.733307  412831 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b from cached tarball
	I1202 19:54:54.733352  412831 cache.go:243] Successfully downloaded all kic artifacts
	I1202 19:54:54.733415  412831 start.go:360] acquireMachinesLock for addons-893295: {Name:mk42cd6f39fb536484d21dc2475baeee68e879a3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 19:54:54.733551  412831 start.go:364] duration metric: took 108.678µs to acquireMachinesLock for "addons-893295"
	I1202 19:54:54.733590  412831 start.go:93] Provisioning new machine with config: &{Name:addons-893295 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-893295 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 19:54:54.733666  412831 start.go:125] createHost starting for "" (driver="docker")
	I1202 19:54:54.735565  412831 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1202 19:54:54.735813  412831 start.go:159] libmachine.API.Create for "addons-893295" (driver="docker")
	I1202 19:54:54.735854  412831 client.go:173] LocalClient.Create starting
	I1202 19:54:54.736000  412831 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca.pem
	I1202 19:54:54.847027  412831 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/cert.pem
	I1202 19:54:54.932568  412831 cli_runner.go:164] Run: docker network inspect addons-893295 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1202 19:54:54.949980  412831 cli_runner.go:211] docker network inspect addons-893295 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1202 19:54:54.950097  412831 network_create.go:284] running [docker network inspect addons-893295] to gather additional debugging logs...
	I1202 19:54:54.950123  412831 cli_runner.go:164] Run: docker network inspect addons-893295
	W1202 19:54:54.967909  412831 cli_runner.go:211] docker network inspect addons-893295 returned with exit code 1
	I1202 19:54:54.967947  412831 network_create.go:287] error running [docker network inspect addons-893295]: docker network inspect addons-893295: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-893295 not found
	I1202 19:54:54.967967  412831 network_create.go:289] output of [docker network inspect addons-893295]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-893295 not found
	
	** /stderr **
	I1202 19:54:54.968130  412831 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 19:54:54.986008  412831 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f4e9c0}
	I1202 19:54:54.986055  412831 network_create.go:124] attempt to create docker network addons-893295 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1202 19:54:54.986125  412831 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-893295 addons-893295
	I1202 19:54:55.034573  412831 network_create.go:108] docker network addons-893295 192.168.49.0/24 created
	I1202 19:54:55.034613  412831 kic.go:121] calculated static IP "192.168.49.2" for the "addons-893295" container
	I1202 19:54:55.034677  412831 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1202 19:54:55.052174  412831 cli_runner.go:164] Run: docker volume create addons-893295 --label name.minikube.sigs.k8s.io=addons-893295 --label created_by.minikube.sigs.k8s.io=true
	I1202 19:54:55.071162  412831 oci.go:103] Successfully created a docker volume addons-893295
	I1202 19:54:55.071268  412831 cli_runner.go:164] Run: docker run --rm --name addons-893295-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-893295 --entrypoint /usr/bin/test -v addons-893295:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib
	I1202 19:54:58.586037  412831 cli_runner.go:217] Completed: docker run --rm --name addons-893295-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-893295 --entrypoint /usr/bin/test -v addons-893295:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib: (3.514704843s)
	I1202 19:54:58.586091  412831 oci.go:107] Successfully prepared a docker volume addons-893295
	I1202 19:54:58.586187  412831 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 19:54:58.586205  412831 kic.go:194] Starting extracting preloaded images to volume ...
	I1202 19:54:58.586275  412831 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21997-407427/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-893295:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -I lz4 -xf /preloaded.tar -C /extractDir
	I1202 19:55:02.456234  412831 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21997-407427/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-893295:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -I lz4 -xf /preloaded.tar -C /extractDir: (3.869896147s)
	I1202 19:55:02.456275  412831 kic.go:203] duration metric: took 3.870066852s to extract preloaded images to volume ...
	W1202 19:55:02.456383  412831 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1202 19:55:02.456421  412831 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1202 19:55:02.456466  412831 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1202 19:55:02.515529  412831 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-893295 --name addons-893295 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-893295 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-893295 --network addons-893295 --ip 192.168.49.2 --volume addons-893295:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b
	I1202 19:55:02.791780  412831 cli_runner.go:164] Run: docker container inspect addons-893295 --format={{.State.Running}}
	I1202 19:55:02.811308  412831 cli_runner.go:164] Run: docker container inspect addons-893295 --format={{.State.Status}}
	I1202 19:55:02.830838  412831 cli_runner.go:164] Run: docker exec addons-893295 stat /var/lib/dpkg/alternatives/iptables
	I1202 19:55:02.879029  412831 oci.go:144] the created container "addons-893295" has a running status.
	I1202 19:55:02.879062  412831 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21997-407427/.minikube/machines/addons-893295/id_rsa...
	I1202 19:55:02.998487  412831 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21997-407427/.minikube/machines/addons-893295/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1202 19:55:03.026337  412831 cli_runner.go:164] Run: docker container inspect addons-893295 --format={{.State.Status}}
	I1202 19:55:03.050478  412831 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1202 19:55:03.050512  412831 kic_runner.go:114] Args: [docker exec --privileged addons-893295 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1202 19:55:03.092774  412831 cli_runner.go:164] Run: docker container inspect addons-893295 --format={{.State.Status}}
	I1202 19:55:03.116093  412831 machine.go:94] provisionDockerMachine start ...
	I1202 19:55:03.117453  412831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-893295
	I1202 19:55:03.143504  412831 main.go:143] libmachine: Using SSH client type: native
	I1202 19:55:03.143861  412831 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1202 19:55:03.143879  412831 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 19:55:03.144561  412831 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45870->127.0.0.1:33148: read: connection reset by peer
	I1202 19:55:06.288387  412831 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-893295
	
	I1202 19:55:06.288423  412831 ubuntu.go:182] provisioning hostname "addons-893295"
	I1202 19:55:06.288493  412831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-893295
	I1202 19:55:06.307948  412831 main.go:143] libmachine: Using SSH client type: native
	I1202 19:55:06.308238  412831 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1202 19:55:06.308254  412831 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-893295 && echo "addons-893295" | sudo tee /etc/hostname
	I1202 19:55:06.459279  412831 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-893295
	
	I1202 19:55:06.459398  412831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-893295
	I1202 19:55:06.479773  412831 main.go:143] libmachine: Using SSH client type: native
	I1202 19:55:06.480016  412831 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1202 19:55:06.480036  412831 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-893295' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-893295/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-893295' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 19:55:06.621432  412831 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1202 19:55:06.621470  412831 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-407427/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-407427/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-407427/.minikube}
	I1202 19:55:06.621539  412831 ubuntu.go:190] setting up certificates
	I1202 19:55:06.621560  412831 provision.go:84] configureAuth start
	I1202 19:55:06.621638  412831 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-893295
	I1202 19:55:06.640269  412831 provision.go:143] copyHostCerts
	I1202 19:55:06.640365  412831 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-407427/.minikube/ca.pem (1082 bytes)
	I1202 19:55:06.640500  412831 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-407427/.minikube/cert.pem (1123 bytes)
	I1202 19:55:06.640579  412831 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-407427/.minikube/key.pem (1675 bytes)
	I1202 19:55:06.640645  412831 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-407427/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca-key.pem org=jenkins.addons-893295 san=[127.0.0.1 192.168.49.2 addons-893295 localhost minikube]
	I1202 19:55:06.772196  412831 provision.go:177] copyRemoteCerts
	I1202 19:55:06.772260  412831 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 19:55:06.772296  412831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-893295
	I1202 19:55:06.792279  412831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/addons-893295/id_rsa Username:docker}
	I1202 19:55:06.893428  412831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1202 19:55:06.913887  412831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1202 19:55:06.932439  412831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 19:55:06.951739  412831 provision.go:87] duration metric: took 330.156948ms to configureAuth
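The configureAuth step above copies the host CA material and then issues a server certificate whose SANs cover every name the node may be reached by (127.0.0.1, 192.168.49.2, addons-893295, localhost, minikube). A self-contained Go sketch of issuing such a SAN-bearing certificate from a CA with crypto/x509; it uses a throwaway CA and elides error handling, so it only illustrates the shape of the step, not provision.go itself:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA standing in for .minikube/certs/ca.pem + ca-key.pem;
	// errors are ignored for brevity in this sketch.
	caKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the SANs from the log, signed by the CA.
	srvKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.addons-893295"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"addons-893295", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

	// PEM-encode the leaf the way server.pem is written to the machine dir.
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}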
	I1202 19:55:06.951774  412831 ubuntu.go:206] setting minikube options for container-runtime
	I1202 19:55:06.952010  412831 config.go:182] Loaded profile config "addons-893295": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:55:06.952149  412831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-893295
	I1202 19:55:06.971190  412831 main.go:143] libmachine: Using SSH client type: native
	I1202 19:55:06.971456  412831 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1202 19:55:06.971474  412831 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 19:55:07.253416  412831 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 19:55:07.253445  412831 machine.go:97] duration metric: took 4.137328753s to provisionDockerMachine
	I1202 19:55:07.253457  412831 client.go:176] duration metric: took 12.517596549s to LocalClient.Create
	I1202 19:55:07.253473  412831 start.go:167] duration metric: took 12.517661857s to libmachine.API.Create "addons-893295"
	I1202 19:55:07.253481  412831 start.go:293] postStartSetup for "addons-893295" (driver="docker")
	I1202 19:55:07.253490  412831 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 19:55:07.253542  412831 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 19:55:07.253580  412831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-893295
	I1202 19:55:07.272132  412831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/addons-893295/id_rsa Username:docker}
	I1202 19:55:07.374390  412831 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 19:55:07.378013  412831 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 19:55:07.378058  412831 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 19:55:07.378087  412831 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-407427/.minikube/addons for local assets ...
	I1202 19:55:07.378162  412831 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-407427/.minikube/files for local assets ...
	I1202 19:55:07.378200  412831 start.go:296] duration metric: took 124.711475ms for postStartSetup
	I1202 19:55:07.378551  412831 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-893295
	I1202 19:55:07.396607  412831 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/addons-893295/config.json ...
	I1202 19:55:07.396907  412831 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 19:55:07.396956  412831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-893295
	I1202 19:55:07.414819  412831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/addons-893295/id_rsa Username:docker}
	I1202 19:55:07.512647  412831 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 19:55:07.517612  412831 start.go:128] duration metric: took 12.783929128s to createHost
	I1202 19:55:07.517651  412831 start.go:83] releasing machines lock for "addons-893295", held for 12.784076086s
	I1202 19:55:07.517749  412831 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-893295
	I1202 19:55:07.536778  412831 ssh_runner.go:195] Run: cat /version.json
	I1202 19:55:07.536840  412831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-893295
	I1202 19:55:07.536845  412831 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 19:55:07.536937  412831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-893295
	I1202 19:55:07.556216  412831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/addons-893295/id_rsa Username:docker}
	I1202 19:55:07.556702  412831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/addons-893295/id_rsa Username:docker}
	I1202 19:55:07.708228  412831 ssh_runner.go:195] Run: systemctl --version
	I1202 19:55:07.715149  412831 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 19:55:07.750385  412831 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 19:55:07.755412  412831 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 19:55:07.755484  412831 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 19:55:07.783980  412831 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1202 19:55:07.784010  412831 start.go:496] detecting cgroup driver to use...
	I1202 19:55:07.784052  412831 detect.go:190] detected "systemd" cgroup driver on host os
	I1202 19:55:07.784133  412831 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 19:55:07.801289  412831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 19:55:07.814283  412831 docker.go:218] disabling cri-docker service (if available) ...
	I1202 19:55:07.814348  412831 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 19:55:07.831847  412831 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 19:55:07.850341  412831 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 19:55:07.933038  412831 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 19:55:08.021963  412831 docker.go:234] disabling docker service ...
	I1202 19:55:08.022044  412831 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 19:55:08.041032  412831 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 19:55:08.054642  412831 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 19:55:08.137632  412831 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 19:55:08.220632  412831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 19:55:08.233740  412831 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 19:55:08.248856  412831 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1202 19:55:08.248925  412831 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:55:08.260653  412831 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1202 19:55:08.260720  412831 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:55:08.270455  412831 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:55:08.280043  412831 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:55:08.289807  412831 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 19:55:08.298915  412831 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:55:08.308358  412831 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:55:08.322943  412831 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:55:08.332807  412831 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 19:55:08.340604  412831 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 19:55:08.348826  412831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 19:55:08.428301  412831 ssh_runner.go:195] Run: sudo systemctl restart crio
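The block above rewrites /etc/crio/crio.conf.d/02-crio.conf in place with sed: pin the pause image, switch cgroup_manager to systemd, move conmon into the pod cgroup, open unprivileged ports via default_sysctls, and then restart cri-o. A rough Go equivalent of the first two rewrites, operating on a scratch copy of the drop-in (the file name and scope are assumptions for illustration only):

package main

import (
	"fmt"
	"os"
	"regexp"
)

// rewriteCrioConf mirrors the sed pipeline in the log: pin the pause image
// and force the systemd cgroup manager in a cri-o drop-in config.
func rewriteCrioConf(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(`cgroup_manager = "systemd"`))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	// Point at a scratch copy of 02-crio.conf, not the live node config.
	if err := rewriteCrioConf("02-crio.conf"); err != nil {
		fmt.Println("error:", err)
	}
}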
	I1202 19:55:08.561309  412831 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 19:55:08.561395  412831 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 19:55:08.565598  412831 start.go:564] Will wait 60s for crictl version
	I1202 19:55:08.565656  412831 ssh_runner.go:195] Run: which crictl
	I1202 19:55:08.569697  412831 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 19:55:08.597089  412831 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1202 19:55:08.597176  412831 ssh_runner.go:195] Run: crio --version
	I1202 19:55:08.626472  412831 ssh_runner.go:195] Run: crio --version
	I1202 19:55:08.658250  412831 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.2 ...
	I1202 19:55:08.659939  412831 cli_runner.go:164] Run: docker network inspect addons-893295 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 19:55:08.677856  412831 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1202 19:55:08.682103  412831 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 19:55:08.692602  412831 kubeadm.go:884] updating cluster {Name:addons-893295 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-893295 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 19:55:08.692734  412831 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 19:55:08.692780  412831 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 19:55:08.724755  412831 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 19:55:08.724779  412831 crio.go:433] Images already preloaded, skipping extraction
	I1202 19:55:08.724834  412831 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 19:55:08.750165  412831 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 19:55:08.750189  412831 cache_images.go:86] Images are preloaded, skipping loading
	I1202 19:55:08.750199  412831 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.2 crio true true} ...
	I1202 19:55:08.750330  412831 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-893295 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:addons-893295 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1202 19:55:08.750418  412831 ssh_runner.go:195] Run: crio config
	I1202 19:55:08.797293  412831 cni.go:84] Creating CNI manager for ""
	I1202 19:55:08.797319  412831 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 19:55:08.797339  412831 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1202 19:55:08.797367  412831 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-893295 NodeName:addons-893295 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 19:55:08.797504  412831 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-893295"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1202 19:55:08.797589  412831 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1202 19:55:08.806141  412831 binaries.go:51] Found k8s binaries, skipping transfer
	I1202 19:55:08.806215  412831 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 19:55:08.814502  412831 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1202 19:55:08.828195  412831 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1202 19:55:08.844151  412831 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1202 19:55:08.857737  412831 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1202 19:55:08.861578  412831 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
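Both host.minikube.internal and control-plane.minikube.internal are pinned the same way: strip any existing line ending in the name, append a fresh IP-to-name mapping, and copy the result back over /etc/hosts. A small Go sketch of that idempotent update, pointed at a scratch file rather than the real /etc/hosts (paths are illustrative):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry drops any line already ending in "<tab><name>" and appends
// a fresh "<ip>\t<name>" mapping, matching the grep -v / echo / cp pattern in
// the log for host.minikube.internal and control-plane.minikube.internal.
func ensureHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if line != "" && !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	// Use a scratch file; editing the real /etc/hosts needs root.
	if err := ensureHostsEntry("hosts", "192.168.49.2", "control-plane.minikube.internal"); err != nil {
		fmt.Println("error:", err)
	}
}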
	I1202 19:55:08.872216  412831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 19:55:08.950351  412831 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 19:55:08.974685  412831 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/addons-893295 for IP: 192.168.49.2
	I1202 19:55:08.974711  412831 certs.go:195] generating shared ca certs ...
	I1202 19:55:08.974731  412831 certs.go:227] acquiring lock for ca certs: {Name:mkea11924193dd4dc459e16d70207bde383196df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:55:08.974887  412831 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-407427/.minikube/ca.key
	I1202 19:55:09.190324  412831 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-407427/.minikube/ca.crt ...
	I1202 19:55:09.190366  412831 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-407427/.minikube/ca.crt: {Name:mk3b995f99d1d87432666ba663c87cd170b0d45e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:55:09.190625  412831 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-407427/.minikube/ca.key ...
	I1202 19:55:09.190649  412831 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-407427/.minikube/ca.key: {Name:mkf5b188ab09a4301c9639eae09b9b97499c97f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:55:09.190779  412831 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-407427/.minikube/proxy-client-ca.key
	I1202 19:55:09.297834  412831 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-407427/.minikube/proxy-client-ca.crt ...
	I1202 19:55:09.297877  412831 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-407427/.minikube/proxy-client-ca.crt: {Name:mk00dc72744d82467866a30b889d56ba015b653a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:55:09.298131  412831 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-407427/.minikube/proxy-client-ca.key ...
	I1202 19:55:09.298152  412831 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-407427/.minikube/proxy-client-ca.key: {Name:mk1a2c24e9cddc950384320ea1a06283a2afe5fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:55:09.298273  412831 certs.go:257] generating profile certs ...
	I1202 19:55:09.298386  412831 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/addons-893295/client.key
	I1202 19:55:09.298407  412831 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/addons-893295/client.crt with IP's: []
	I1202 19:55:09.506249  412831 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/addons-893295/client.crt ...
	I1202 19:55:09.506287  412831 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/addons-893295/client.crt: {Name:mkae91d3e3f021742810a61285095b3b97621504 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:55:09.506472  412831 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/addons-893295/client.key ...
	I1202 19:55:09.506487  412831 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/addons-893295/client.key: {Name:mk8c5ce4c85c9f45100bd5dbcecca0cdda41ceea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:55:09.506570  412831 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/addons-893295/apiserver.key.159d5c69
	I1202 19:55:09.506590  412831 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/addons-893295/apiserver.crt.159d5c69 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1202 19:55:09.595757  412831 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/addons-893295/apiserver.crt.159d5c69 ...
	I1202 19:55:09.595801  412831 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/addons-893295/apiserver.crt.159d5c69: {Name:mk95e9c3c58b870a262a683dd3e41ccd67ea9368 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:55:09.595969  412831 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/addons-893295/apiserver.key.159d5c69 ...
	I1202 19:55:09.595982  412831 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/addons-893295/apiserver.key.159d5c69: {Name:mk7e8370a840617572b29fad6cafa3d079b47f6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:55:09.596052  412831 certs.go:382] copying /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/addons-893295/apiserver.crt.159d5c69 -> /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/addons-893295/apiserver.crt
	I1202 19:55:09.596178  412831 certs.go:386] copying /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/addons-893295/apiserver.key.159d5c69 -> /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/addons-893295/apiserver.key
	I1202 19:55:09.596238  412831 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/addons-893295/proxy-client.key
	I1202 19:55:09.596263  412831 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/addons-893295/proxy-client.crt with IP's: []
	I1202 19:55:09.738826  412831 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/addons-893295/proxy-client.crt ...
	I1202 19:55:09.738859  412831 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/addons-893295/proxy-client.crt: {Name:mkce0c088d80376cc5c2a26e657f973c5fcb8f04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:55:09.739036  412831 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/addons-893295/proxy-client.key ...
	I1202 19:55:09.739050  412831 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/addons-893295/proxy-client.key: {Name:mka340db84365af4e52e952419508f47449397f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:55:09.739246  412831 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca-key.pem (1679 bytes)
	I1202 19:55:09.739299  412831 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca.pem (1082 bytes)
	I1202 19:55:09.739326  412831 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/cert.pem (1123 bytes)
	I1202 19:55:09.739350  412831 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/key.pem (1675 bytes)
	I1202 19:55:09.740027  412831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 19:55:09.759656  412831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1202 19:55:09.778604  412831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 19:55:09.797882  412831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1202 19:55:09.817362  412831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/addons-893295/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1202 19:55:09.836277  412831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/addons-893295/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1202 19:55:09.855507  412831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/addons-893295/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 19:55:09.875785  412831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/addons-893295/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1202 19:55:09.896362  412831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 19:55:09.918539  412831 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 19:55:09.932766  412831 ssh_runner.go:195] Run: openssl version
	I1202 19:55:09.939387  412831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 19:55:09.951729  412831 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:55:09.955840  412831 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 19:55 /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:55:09.955911  412831 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:55:09.991210  412831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
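The two commands above make the minikube CA trusted system-wide: openssl prints the certificate's subject hash, and a <hash>.0 symlink under /etc/ssl/certs is pointed at the PEM. A short Go sketch of the same idea, assuming openssl is on PATH and using local example paths rather than the node's certificate store:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// trustCA hashes the certificate subject with openssl and links
// <certsDir>/<hash>.0 at the PEM, as the log's symlink step does.
func trustCA(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	os.Remove(link) // replace any stale link
	return os.Symlink(certPath, link)
}

func main() {
	if err := trustCA("minikubeCA.pem", "certs"); err != nil {
		fmt.Println("error:", err)
	}
}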
	I1202 19:55:10.001439  412831 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 19:55:10.005695  412831 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1202 19:55:10.005754  412831 kubeadm.go:401] StartCluster: {Name:addons-893295 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-893295 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 19:55:10.005838  412831 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 19:55:10.005898  412831 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 19:55:10.036256  412831 cri.go:89] found id: ""
	I1202 19:55:10.036323  412831 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 19:55:10.044848  412831 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1202 19:55:10.053098  412831 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1202 19:55:10.053159  412831 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 19:55:10.061270  412831 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1202 19:55:10.061290  412831 kubeadm.go:158] found existing configuration files:
	
	I1202 19:55:10.061332  412831 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1202 19:55:10.069972  412831 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1202 19:55:10.070031  412831 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1202 19:55:10.077899  412831 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1202 19:55:10.086424  412831 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1202 19:55:10.086493  412831 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 19:55:10.094133  412831 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1202 19:55:10.102314  412831 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1202 19:55:10.102394  412831 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 19:55:10.110490  412831 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1202 19:55:10.118638  412831 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1202 19:55:10.118714  412831 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1202 19:55:10.126771  412831 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1202 19:55:10.186476  412831 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1202 19:55:10.246526  412831 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1202 19:55:19.778704  412831 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1202 19:55:19.778787  412831 kubeadm.go:319] [preflight] Running pre-flight checks
	I1202 19:55:19.778901  412831 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1202 19:55:19.778985  412831 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1202 19:55:19.779047  412831 kubeadm.go:319] OS: Linux
	I1202 19:55:19.779132  412831 kubeadm.go:319] CGROUPS_CPU: enabled
	I1202 19:55:19.779223  412831 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1202 19:55:19.779329  412831 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1202 19:55:19.779432  412831 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1202 19:55:19.779513  412831 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1202 19:55:19.779591  412831 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1202 19:55:19.779671  412831 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1202 19:55:19.779738  412831 kubeadm.go:319] CGROUPS_IO: enabled
	I1202 19:55:19.779851  412831 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1202 19:55:19.779962  412831 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1202 19:55:19.780123  412831 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1202 19:55:19.780191  412831 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1202 19:55:19.782154  412831 out.go:252]   - Generating certificates and keys ...
	I1202 19:55:19.782262  412831 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1202 19:55:19.782375  412831 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1202 19:55:19.782444  412831 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1202 19:55:19.782515  412831 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1202 19:55:19.782567  412831 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1202 19:55:19.782610  412831 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1202 19:55:19.782666  412831 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1202 19:55:19.782812  412831 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-893295 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1202 19:55:19.782870  412831 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1202 19:55:19.782970  412831 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-893295 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1202 19:55:19.783033  412831 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1202 19:55:19.783104  412831 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1202 19:55:19.783186  412831 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1202 19:55:19.783286  412831 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1202 19:55:19.783356  412831 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1202 19:55:19.783430  412831 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1202 19:55:19.783499  412831 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1202 19:55:19.783593  412831 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1202 19:55:19.783680  412831 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1202 19:55:19.783748  412831 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1202 19:55:19.783810  412831 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1202 19:55:19.785689  412831 out.go:252]   - Booting up control plane ...
	I1202 19:55:19.785837  412831 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1202 19:55:19.785955  412831 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1202 19:55:19.786051  412831 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1202 19:55:19.786186  412831 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1202 19:55:19.786342  412831 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1202 19:55:19.786536  412831 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1202 19:55:19.786645  412831 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1202 19:55:19.786689  412831 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1202 19:55:19.786820  412831 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1202 19:55:19.786926  412831 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1202 19:55:19.786985  412831 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001087089s
	I1202 19:55:19.787086  412831 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1202 19:55:19.787159  412831 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1202 19:55:19.787232  412831 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1202 19:55:19.787300  412831 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1202 19:55:19.787364  412831 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.80207997s
	I1202 19:55:19.787417  412831 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.024506725s
	I1202 19:55:19.787471  412831 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.001062315s
	I1202 19:55:19.787567  412831 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1202 19:55:19.787739  412831 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1202 19:55:19.787821  412831 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1202 19:55:19.788017  412831 kubeadm.go:319] [mark-control-plane] Marking the node addons-893295 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1202 19:55:19.788097  412831 kubeadm.go:319] [bootstrap-token] Using token: tkytdp.l0r3f1mch4ddid0g
	I1202 19:55:19.789689  412831 out.go:252]   - Configuring RBAC rules ...
	I1202 19:55:19.789795  412831 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1202 19:55:19.789880  412831 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1202 19:55:19.790012  412831 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1202 19:55:19.790154  412831 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1202 19:55:19.790299  412831 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1202 19:55:19.790424  412831 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1202 19:55:19.790581  412831 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1202 19:55:19.790644  412831 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1202 19:55:19.790700  412831 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1202 19:55:19.790707  412831 kubeadm.go:319] 
	I1202 19:55:19.790757  412831 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1202 19:55:19.790763  412831 kubeadm.go:319] 
	I1202 19:55:19.790826  412831 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1202 19:55:19.790833  412831 kubeadm.go:319] 
	I1202 19:55:19.790855  412831 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1202 19:55:19.790906  412831 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1202 19:55:19.790951  412831 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1202 19:55:19.790956  412831 kubeadm.go:319] 
	I1202 19:55:19.791009  412831 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1202 19:55:19.791017  412831 kubeadm.go:319] 
	I1202 19:55:19.791059  412831 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1202 19:55:19.791078  412831 kubeadm.go:319] 
	I1202 19:55:19.791125  412831 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1202 19:55:19.791224  412831 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1202 19:55:19.791293  412831 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1202 19:55:19.791311  412831 kubeadm.go:319] 
	I1202 19:55:19.791393  412831 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1202 19:55:19.791476  412831 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1202 19:55:19.791486  412831 kubeadm.go:319] 
	I1202 19:55:19.791622  412831 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token tkytdp.l0r3f1mch4ddid0g \
	I1202 19:55:19.791779  412831 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:48349ffa2369b87cd15480ece56edb1c5c5d97a828930f9dfbf1d1ceccf80ff4 \
	I1202 19:55:19.791809  412831 kubeadm.go:319] 	--control-plane 
	I1202 19:55:19.791813  412831 kubeadm.go:319] 
	I1202 19:55:19.791883  412831 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1202 19:55:19.791892  412831 kubeadm.go:319] 
	I1202 19:55:19.791964  412831 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token tkytdp.l0r3f1mch4ddid0g \
	I1202 19:55:19.792096  412831 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:48349ffa2369b87cd15480ece56edb1c5c5d97a828930f9dfbf1d1ceccf80ff4 
	I1202 19:55:19.792113  412831 cni.go:84] Creating CNI manager for ""
	I1202 19:55:19.792123  412831 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 19:55:19.794028  412831 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1202 19:55:19.795583  412831 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1202 19:55:19.800666  412831 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1202 19:55:19.800693  412831 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1202 19:55:19.815448  412831 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1202 19:55:20.039869  412831 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1202 19:55:20.039955  412831 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 19:55:20.039961  412831 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-893295 minikube.k8s.io/updated_at=2025_12_02T19_55_20_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=c0dec3f81fa1d67ed4ee425e0873d0aa009f9b92 minikube.k8s.io/name=addons-893295 minikube.k8s.io/primary=true
	I1202 19:55:20.051840  412831 ops.go:34] apiserver oom_adj: -16
	I1202 19:55:20.118951  412831 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 19:55:20.619723  412831 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 19:55:21.119446  412831 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 19:55:21.619614  412831 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 19:55:22.119291  412831 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 19:55:22.619810  412831 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 19:55:23.119576  412831 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 19:55:23.619291  412831 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 19:55:24.119861  412831 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 19:55:24.619924  412831 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 19:55:24.687062  412831 kubeadm.go:1114] duration metric: took 4.64717155s to wait for elevateKubeSystemPrivileges
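The repeated `kubectl get sa default` calls above are a simple readiness poll: the command is retried roughly every 500ms until the default service account exists, at which point the RBAC setup can complete. A hedged Go sketch of that retry pattern; the kubeconfig path and interval come from the log, while the function itself and the bare kubectl binary name are illustrative:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA re-runs `kubectl get sa default` on an interval until it
// succeeds or the deadline passes, mirroring the retry loop in the log.
func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		err := exec.Command("kubectl", "--kubeconfig", kubeconfig,
			"get", "sa", "default").Run()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("default service account not ready after %s: %w", timeout, err)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", time.Minute); err != nil {
		fmt.Println("error:", err)
	}
}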
	I1202 19:55:24.687121  412831 kubeadm.go:403] duration metric: took 14.681374363s to StartCluster
	I1202 19:55:24.687150  412831 settings.go:142] acquiring lock: {Name:mk79858205e0c110b31d2645b49097e349ff37c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:55:24.687266  412831 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-407427/kubeconfig
	I1202 19:55:24.687672  412831 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-407427/kubeconfig: {Name:mk71769b01652992a8aaa239aae4f5c38b3500e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:55:24.687895  412831 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1202 19:55:24.687891  412831 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 19:55:24.687910  412831 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1202 19:55:24.688125  412831 config.go:182] Loaded profile config "addons-893295": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:55:24.688141  412831 addons.go:70] Setting default-storageclass=true in profile "addons-893295"
	I1202 19:55:24.688171  412831 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-893295"
	I1202 19:55:24.688187  412831 addons.go:70] Setting gcp-auth=true in profile "addons-893295"
	I1202 19:55:24.688194  412831 addons.go:70] Setting cloud-spanner=true in profile "addons-893295"
	I1202 19:55:24.688213  412831 addons.go:70] Setting ingress-dns=true in profile "addons-893295"
	I1202 19:55:24.688220  412831 addons.go:70] Setting registry-creds=true in profile "addons-893295"
	I1202 19:55:24.688229  412831 addons.go:239] Setting addon cloud-spanner=true in "addons-893295"
	I1202 19:55:24.688240  412831 addons.go:70] Setting storage-provisioner=true in profile "addons-893295"
	I1202 19:55:24.688241  412831 addons.go:70] Setting ingress=true in profile "addons-893295"
	I1202 19:55:24.688250  412831 addons.go:239] Setting addon registry-creds=true in "addons-893295"
	I1202 19:55:24.688259  412831 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-893295"
	I1202 19:55:24.688265  412831 addons.go:239] Setting addon ingress=true in "addons-893295"
	I1202 19:55:24.688250  412831 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-893295"
	I1202 19:55:24.688279  412831 host.go:66] Checking if "addons-893295" exists ...
	I1202 19:55:24.688289  412831 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-893295"
	I1202 19:55:24.688291  412831 host.go:66] Checking if "addons-893295" exists ...
	I1202 19:55:24.688354  412831 host.go:66] Checking if "addons-893295" exists ...
	I1202 19:55:24.688368  412831 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-893295"
	I1202 19:55:24.688406  412831 host.go:66] Checking if "addons-893295" exists ...
	I1202 19:55:24.688519  412831 cli_runner.go:164] Run: docker container inspect addons-893295 --format={{.State.Status}}
	I1202 19:55:24.688545  412831 cli_runner.go:164] Run: docker container inspect addons-893295 --format={{.State.Status}}
	I1202 19:55:24.688767  412831 cli_runner.go:164] Run: docker container inspect addons-893295 --format={{.State.Status}}
	I1202 19:55:24.688819  412831 cli_runner.go:164] Run: docker container inspect addons-893295 --format={{.State.Status}}
	I1202 19:55:24.688844  412831 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-893295"
	I1202 19:55:24.688849  412831 addons.go:70] Setting volcano=true in profile "addons-893295"
	I1202 19:55:24.688874  412831 addons.go:239] Setting addon volcano=true in "addons-893295"
	I1202 19:55:24.688877  412831 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-893295"
	I1202 19:55:24.688901  412831 host.go:66] Checking if "addons-893295" exists ...
	I1202 19:55:24.688914  412831 host.go:66] Checking if "addons-893295" exists ...
	I1202 19:55:24.688948  412831 cli_runner.go:164] Run: docker container inspect addons-893295 --format={{.State.Status}}
	I1202 19:55:24.688948  412831 cli_runner.go:164] Run: docker container inspect addons-893295 --format={{.State.Status}}
	I1202 19:55:24.689387  412831 cli_runner.go:164] Run: docker container inspect addons-893295 --format={{.State.Status}}
	I1202 19:55:24.688176  412831 addons.go:70] Setting yakd=true in profile "addons-893295"
	I1202 19:55:24.690604  412831 addons.go:239] Setting addon yakd=true in "addons-893295"
	I1202 19:55:24.690637  412831 host.go:66] Checking if "addons-893295" exists ...
	I1202 19:55:24.691157  412831 cli_runner.go:164] Run: docker container inspect addons-893295 --format={{.State.Status}}
	I1202 19:55:24.688207  412831 mustload.go:66] Loading cluster: addons-893295
	I1202 19:55:24.691560  412831 config.go:182] Loaded profile config "addons-893295": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:55:24.691830  412831 cli_runner.go:164] Run: docker container inspect addons-893295 --format={{.State.Status}}
	I1202 19:55:24.688231  412831 addons.go:239] Setting addon ingress-dns=true in "addons-893295"
	I1202 19:55:24.692028  412831 host.go:66] Checking if "addons-893295" exists ...
	I1202 19:55:24.692563  412831 cli_runner.go:164] Run: docker container inspect addons-893295 --format={{.State.Status}}
	I1202 19:55:24.693149  412831 out.go:179] * Verifying Kubernetes components...
	I1202 19:55:24.689453  412831 cli_runner.go:164] Run: docker container inspect addons-893295 --format={{.State.Status}}
	I1202 19:55:24.689518  412831 addons.go:70] Setting volumesnapshots=true in profile "addons-893295"
	I1202 19:55:24.694596  412831 addons.go:239] Setting addon volumesnapshots=true in "addons-893295"
	I1202 19:55:24.694655  412831 host.go:66] Checking if "addons-893295" exists ...
	I1202 19:55:24.689630  412831 addons.go:70] Setting inspektor-gadget=true in profile "addons-893295"
	I1202 19:55:24.694918  412831 addons.go:239] Setting addon inspektor-gadget=true in "addons-893295"
	I1202 19:55:24.694945  412831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 19:55:24.694966  412831 host.go:66] Checking if "addons-893295" exists ...
	I1202 19:55:24.689621  412831 addons.go:70] Setting registry=true in profile "addons-893295"
	I1202 19:55:24.695114  412831 addons.go:239] Setting addon registry=true in "addons-893295"
	I1202 19:55:24.695145  412831 host.go:66] Checking if "addons-893295" exists ...
	I1202 19:55:24.689650  412831 addons.go:70] Setting metrics-server=true in profile "addons-893295"
	I1202 19:55:24.695329  412831 addons.go:239] Setting addon metrics-server=true in "addons-893295"
	I1202 19:55:24.695372  412831 host.go:66] Checking if "addons-893295" exists ...
	I1202 19:55:24.695944  412831 cli_runner.go:164] Run: docker container inspect addons-893295 --format={{.State.Status}}
	I1202 19:55:24.688251  412831 addons.go:239] Setting addon storage-provisioner=true in "addons-893295"
	I1202 19:55:24.689660  412831 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-893295"
	I1202 19:55:24.696280  412831 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-893295"
	I1202 19:55:24.696284  412831 host.go:66] Checking if "addons-893295" exists ...
	I1202 19:55:24.696310  412831 host.go:66] Checking if "addons-893295" exists ...
	I1202 19:55:24.696419  412831 cli_runner.go:164] Run: docker container inspect addons-893295 --format={{.State.Status}}
	I1202 19:55:24.697661  412831 cli_runner.go:164] Run: docker container inspect addons-893295 --format={{.State.Status}}
	I1202 19:55:24.702499  412831 cli_runner.go:164] Run: docker container inspect addons-893295 --format={{.State.Status}}
	I1202 19:55:24.704613  412831 cli_runner.go:164] Run: docker container inspect addons-893295 --format={{.State.Status}}
	I1202 19:55:24.704757  412831 cli_runner.go:164] Run: docker container inspect addons-893295 --format={{.State.Status}}
	W1202 19:55:24.757879  412831 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1202 19:55:24.759015  412831 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-893295"
	I1202 19:55:24.759087  412831 host.go:66] Checking if "addons-893295" exists ...
	I1202 19:55:24.759584  412831 cli_runner.go:164] Run: docker container inspect addons-893295 --format={{.State.Status}}
	I1202 19:55:24.771330  412831 addons.go:239] Setting addon default-storageclass=true in "addons-893295"
	I1202 19:55:24.771388  412831 host.go:66] Checking if "addons-893295" exists ...
	I1202 19:55:24.771854  412831 cli_runner.go:164] Run: docker container inspect addons-893295 --format={{.State.Status}}
	I1202 19:55:24.771889  412831 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1202 19:55:24.772648  412831 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1202 19:55:24.773121  412831 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1202 19:55:24.773139  412831 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1202 19:55:24.773212  412831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-893295
	I1202 19:55:24.778638  412831 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1202 19:55:24.779809  412831 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1202 19:55:24.783353  412831 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1202 19:55:24.783540  412831 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
	I1202 19:55:24.783643  412831 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1202 19:55:24.784723  412831 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1202 19:55:24.788944  412831 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1202 19:55:24.789028  412831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-893295
	I1202 19:55:24.790849  412831 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1202 19:55:24.790909  412831 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1202 19:55:24.790952  412831 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1202 19:55:24.791497  412831 out.go:179]   - Using image docker.io/registry:3.0.0
	I1202 19:55:24.792241  412831 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1202 19:55:24.792260  412831 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1202 19:55:24.792325  412831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-893295
	I1202 19:55:24.796769  412831 host.go:66] Checking if "addons-893295" exists ...
	I1202 19:55:24.798146  412831 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1202 19:55:24.799188  412831 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1202 19:55:24.799555  412831 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1202 19:55:24.799227  412831 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1202 19:55:24.799918  412831 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1202 19:55:24.800221  412831 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1202 19:55:24.800238  412831 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1202 19:55:24.800345  412831 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1202 19:55:24.800362  412831 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1202 19:55:24.800390  412831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-893295
	I1202 19:55:24.800429  412831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-893295
	I1202 19:55:24.800545  412831 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1202 19:55:24.800572  412831 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1202 19:55:24.800621  412831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-893295
	I1202 19:55:24.801056  412831 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 19:55:24.801346  412831 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1202 19:55:24.801415  412831 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1202 19:55:24.801499  412831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-893295
	I1202 19:55:24.801711  412831 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1202 19:55:24.801871  412831 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1202 19:55:24.802056  412831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-893295
	I1202 19:55:24.802120  412831 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1202 19:55:24.802230  412831 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 19:55:24.802463  412831 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1202 19:55:24.802522  412831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-893295
	I1202 19:55:24.804813  412831 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1202 19:55:24.806541  412831 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1202 19:55:24.808968  412831 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1202 19:55:24.809595  412831 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1202 19:55:24.809609  412831 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1202 19:55:24.809618  412831 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1202 19:55:24.809692  412831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-893295
	I1202 19:55:24.811671  412831 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1202 19:55:24.811695  412831 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1202 19:55:24.811759  412831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-893295
	I1202 19:55:24.814202  412831 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1202 19:55:24.814231  412831 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1202 19:55:24.814303  412831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-893295
	I1202 19:55:24.827402  412831 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1202 19:55:24.827429  412831 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1202 19:55:24.827489  412831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-893295
	I1202 19:55:24.830165  412831 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1202 19:55:24.831474  412831 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1202 19:55:24.831501  412831 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1202 19:55:24.831575  412831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-893295
	I1202 19:55:24.846063  412831 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
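The sed pipeline above rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host gateway IP (confirmed by the "host record injected into CoreDNS's ConfigMap" line further down). A rough sketch of the resulting Corefile fragment, assuming the stock kubeadm CoreDNS layout:

    .:53 {
        log
        errors
        # ... default kubeadm plugins elided ...
        hosts {
           192.168.49.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
        cache 30
    }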
	I1202 19:55:24.847959  412831 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1202 19:55:24.849436  412831 out.go:179]   - Using image docker.io/busybox:stable
	I1202 19:55:24.850602  412831 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1202 19:55:24.850672  412831 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1202 19:55:24.850772  412831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-893295
	I1202 19:55:24.852930  412831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/addons-893295/id_rsa Username:docker}
	I1202 19:55:24.854741  412831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/addons-893295/id_rsa Username:docker}
	I1202 19:55:24.872500  412831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/addons-893295/id_rsa Username:docker}
	I1202 19:55:24.890953  412831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/addons-893295/id_rsa Username:docker}
	I1202 19:55:24.891121  412831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/addons-893295/id_rsa Username:docker}
	I1202 19:55:24.891679  412831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/addons-893295/id_rsa Username:docker}
	I1202 19:55:24.892393  412831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/addons-893295/id_rsa Username:docker}
	I1202 19:55:24.893916  412831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/addons-893295/id_rsa Username:docker}
	I1202 19:55:24.895275  412831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/addons-893295/id_rsa Username:docker}
	I1202 19:55:24.894771  412831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/addons-893295/id_rsa Username:docker}
	I1202 19:55:24.901582  412831 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 19:55:24.908003  412831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/addons-893295/id_rsa Username:docker}
	I1202 19:55:24.908427  412831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/addons-893295/id_rsa Username:docker}
	I1202 19:55:24.911951  412831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/addons-893295/id_rsa Username:docker}
	W1202 19:55:24.915727  412831 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1202 19:55:24.915871  412831 retry.go:31] will retry after 340.994629ms: ssh: handshake failed: EOF
	I1202 19:55:24.921948  412831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/addons-893295/id_rsa Username:docker}
	I1202 19:55:24.926260  412831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/addons-893295/id_rsa Username:docker}
	I1202 19:55:25.016289  412831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1202 19:55:25.036438  412831 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1202 19:55:25.036535  412831 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1202 19:55:25.048578  412831 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1202 19:55:25.048609  412831 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1202 19:55:25.060271  412831 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1202 19:55:25.060309  412831 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1202 19:55:25.060644  412831 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1202 19:55:25.060665  412831 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1202 19:55:25.065373  412831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1202 19:55:25.066642  412831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1202 19:55:25.079047  412831 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1202 19:55:25.079089  412831 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1202 19:55:25.082777  412831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1202 19:55:25.088726  412831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1202 19:55:25.099612  412831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1202 19:55:25.101549  412831 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1202 19:55:25.101601  412831 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1202 19:55:25.101865  412831 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1202 19:55:25.101886  412831 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1202 19:55:25.105336  412831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1202 19:55:25.107022  412831 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1202 19:55:25.107049  412831 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1202 19:55:25.108553  412831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1202 19:55:25.114559  412831 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1202 19:55:25.114583  412831 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1202 19:55:25.117681  412831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 19:55:25.136645  412831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1202 19:55:25.146338  412831 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1202 19:55:25.146386  412831 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1202 19:55:25.151639  412831 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1202 19:55:25.151666  412831 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1202 19:55:25.152917  412831 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1202 19:55:25.152999  412831 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1202 19:55:25.167533  412831 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1202 19:55:25.167642  412831 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1202 19:55:25.187828  412831 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1202 19:55:25.187858  412831 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1202 19:55:25.199252  412831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1202 19:55:25.199885  412831 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1202 19:55:25.199948  412831 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1202 19:55:25.227865  412831 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1202 19:55:25.229960  412831 node_ready.go:35] waiting up to 6m0s for node "addons-893295" to be "Ready" ...
	I1202 19:55:25.238768  412831 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1202 19:55:25.238800  412831 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1202 19:55:25.253319  412831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1202 19:55:25.272639  412831 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1202 19:55:25.272739  412831 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1202 19:55:25.307846  412831 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1202 19:55:25.307870  412831 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1202 19:55:25.336100  412831 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1202 19:55:25.336218  412831 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1202 19:55:25.380585  412831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1202 19:55:25.398469  412831 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1202 19:55:25.398498  412831 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1202 19:55:25.480662  412831 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1202 19:55:25.480696  412831 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1202 19:55:25.520218  412831 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1202 19:55:25.520243  412831 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1202 19:55:25.556899  412831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1202 19:55:25.588940  412831 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1202 19:55:25.588977  412831 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1202 19:55:25.627336  412831 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1202 19:55:25.627391  412831 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1202 19:55:25.702577  412831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1202 19:55:25.749908  412831 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-893295" context rescaled to 1 replicas
	I1202 19:55:26.091479  412831 addons.go:495] Verifying addon registry=true in "addons-893295"
	I1202 19:55:26.091806  412831 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.00303983s)
	I1202 19:55:26.093482  412831 out.go:179] * Verifying registry addon...
	I1202 19:55:26.095516  412831 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1202 19:55:26.108287  412831 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1202 19:55:26.108315  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:55:26.141423  412831 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-893295 service yakd-dashboard -n yakd-dashboard
	
	I1202 19:55:26.148403  412831 addons.go:495] Verifying addon metrics-server=true in "addons-893295"
	I1202 19:55:26.599346  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:55:26.829446  412831 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.4488126s)
	W1202 19:55:26.829513  412831 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1202 19:55:26.829544  412831 retry.go:31] will retry after 323.685556ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
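The failure above is the usual CRD ordering race: the VolumeSnapshotClass in csi-hostpath-snapshotclass.yaml is submitted in the same apply that creates the snapshot.storage.k8s.io CRDs, so the API server has no REST mapping for that kind yet. minikube handles this by retrying (the retried apply a few lines below adds --force). A sketch of the equivalent manual two-step workaround, reusing the addon file paths from this run:

    kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
    kubectl wait --for=condition=Established crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
    kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml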
	I1202 19:55:26.829599  412831 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.272656515s)
	I1202 19:55:26.829631  412831 addons.go:495] Verifying addon ingress=true in "addons-893295"
	I1202 19:55:26.829868  412831 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.127249428s)
	I1202 19:55:26.829904  412831 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-893295"
	I1202 19:55:26.832320  412831 out.go:179] * Verifying ingress addon...
	I1202 19:55:26.832398  412831 out.go:179] * Verifying csi-hostpath-driver addon...
	I1202 19:55:26.836516  412831 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1202 19:55:26.836520  412831 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1202 19:55:26.844670  412831 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1202 19:55:26.844699  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:26.844894  412831 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1202 19:55:26.844917  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:27.099740  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:55:27.153957  412831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	W1202 19:55:27.233300  412831 node_ready.go:57] node "addons-893295" has "Ready":"False" status (will retry)
	I1202 19:55:27.341296  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:27.341333  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:27.599775  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:55:27.840421  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:27.840540  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:28.098823  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:55:28.341030  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:28.341063  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:28.599780  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:55:28.841120  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:28.841211  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:29.099914  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1202 19:55:29.233365  412831 node_ready.go:57] node "addons-893295" has "Ready":"False" status (will retry)
	I1202 19:55:29.341004  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:29.341026  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:29.599634  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:55:29.663386  412831 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.50937655s)
	I1202 19:55:29.840475  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:29.840488  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:30.099616  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:55:30.341004  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:30.341163  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:30.599376  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:55:30.840276  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:30.840370  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:31.099916  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1202 19:55:31.233455  412831 node_ready.go:57] node "addons-893295" has "Ready":"False" status (will retry)
	I1202 19:55:31.340978  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:31.340997  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:31.598747  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:55:31.840699  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:31.840761  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:32.099800  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:55:32.340182  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:32.340271  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:32.405984  412831 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1202 19:55:32.406059  412831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-893295
	I1202 19:55:32.425316  412831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/addons-893295/id_rsa Username:docker}
	I1202 19:55:32.533639  412831 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1202 19:55:32.547299  412831 addons.go:239] Setting addon gcp-auth=true in "addons-893295"
	I1202 19:55:32.547371  412831 host.go:66] Checking if "addons-893295" exists ...
	I1202 19:55:32.547756  412831 cli_runner.go:164] Run: docker container inspect addons-893295 --format={{.State.Status}}
	I1202 19:55:32.566179  412831 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1202 19:55:32.566233  412831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-893295
	I1202 19:55:32.585787  412831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/addons-893295/id_rsa Username:docker}
	I1202 19:55:32.599653  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:55:32.686920  412831 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1202 19:55:32.688240  412831 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1202 19:55:32.689414  412831 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1202 19:55:32.689442  412831 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1202 19:55:32.704413  412831 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1202 19:55:32.704443  412831 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1202 19:55:32.717871  412831 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1202 19:55:32.717896  412831 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1202 19:55:32.731351  412831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1202 19:55:32.840586  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:32.840636  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:33.057781  412831 addons.go:495] Verifying addon gcp-auth=true in "addons-893295"
	I1202 19:55:33.062217  412831 out.go:179] * Verifying gcp-auth addon...
	I1202 19:55:33.064810  412831 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1202 19:55:33.067625  412831 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1202 19:55:33.067647  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:55:33.099310  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:55:33.340103  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:33.340412  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:33.567677  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:55:33.598802  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1202 19:55:33.733681  412831 node_ready.go:57] node "addons-893295" has "Ready":"False" status (will retry)
	I1202 19:55:33.839767  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:33.839772  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:34.069223  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:55:34.099158  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:55:34.340482  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:34.340669  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:34.568584  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:55:34.599618  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:55:34.839833  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:34.839883  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:35.069008  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:55:35.098759  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:55:35.340094  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:35.340192  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:35.568090  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:55:35.598970  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1202 19:55:35.733920  412831 node_ready.go:57] node "addons-893295" has "Ready":"False" status (will retry)
	I1202 19:55:35.840181  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:35.840201  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:36.068822  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:55:36.098699  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:55:36.339674  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:36.339734  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:36.568603  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:55:36.599153  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:55:36.840759  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:36.840773  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:37.068759  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:55:37.098671  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:55:37.340597  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:37.340728  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:37.568909  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:55:37.598780  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:55:37.839665  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:37.839727  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:38.068981  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:55:38.098912  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1202 19:55:38.233691  412831 node_ready.go:57] node "addons-893295" has "Ready":"False" status (will retry)
	I1202 19:55:38.340039  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:38.340100  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:38.568063  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:55:38.598881  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:55:38.839244  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:38.839297  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:39.068222  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:55:39.098955  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:55:39.340161  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:39.340306  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:39.568546  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:55:39.599535  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:55:39.839744  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:39.839876  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:40.067864  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:55:40.098925  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1202 19:55:40.234051  412831 node_ready.go:57] node "addons-893295" has "Ready":"False" status (will retry)
	I1202 19:55:40.340295  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:40.340352  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:40.568319  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:55:40.599280  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:55:40.840062  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:40.840207  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:41.067801  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:55:41.098792  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:55:41.339870  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:41.339960  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:41.567865  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:55:41.598837  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:55:41.840119  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:41.840143  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:42.068265  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:55:42.169469  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:55:42.340140  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:42.340188  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:42.567941  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:55:42.598664  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1202 19:55:42.733776  412831 node_ready.go:57] node "addons-893295" has "Ready":"False" status (will retry)
	I1202 19:55:42.840157  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:42.840199  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:43.067676  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:55:43.098324  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:55:43.340373  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:43.340430  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:43.568318  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:55:43.599111  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:55:43.839938  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:43.840042  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:44.068386  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:55:44.099490  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:55:44.340432  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:44.340498  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:44.568336  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:55:44.599168  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:55:44.840582  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:44.840622  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:45.068650  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:55:45.098516  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1202 19:55:45.233473  412831 node_ready.go:57] node "addons-893295" has "Ready":"False" status (will retry)
	I1202 19:55:45.340886  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:45.340882  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:45.568137  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:55:45.598962  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:55:45.839754  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:45.839879  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:46.069024  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:55:46.099278  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:55:46.340490  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:46.340561  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:46.568880  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:55:46.598670  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:55:46.840392  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:46.840565  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:47.068737  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:55:47.098492  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1202 19:55:47.233865  412831 node_ready.go:57] node "addons-893295" has "Ready":"False" status (will retry)
	I1202 19:55:47.339905  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:47.339950  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:47.567747  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:55:47.598811  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:55:47.840845  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:47.840869  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:48.068891  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:55:48.098853  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:55:48.340337  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:48.340369  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:48.568229  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:55:48.599308  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:55:48.840253  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:48.840446  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:49.068129  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:55:49.098796  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:55:49.340145  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:49.340242  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:49.568308  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:55:49.599233  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1202 19:55:49.733090  412831 node_ready.go:57] node "addons-893295" has "Ready":"False" status (will retry)
	I1202 19:55:49.840522  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:49.840717  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:50.071239  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:55:50.099169  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:55:50.340137  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:50.340339  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:50.567901  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:55:50.598840  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:55:50.840155  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:50.840179  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:51.068413  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:55:51.099585  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:55:51.340029  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:51.340102  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:51.567985  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:55:51.598736  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1202 19:55:51.733683  412831 node_ready.go:57] node "addons-893295" has "Ready":"False" status (will retry)
	I1202 19:55:51.839982  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:51.840123  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:52.068219  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:55:52.098904  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:55:52.340050  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:52.340100  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:52.567673  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:55:52.598530  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:55:52.840759  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:52.840895  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:53.067984  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:55:53.099039  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:55:53.339807  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:53.339996  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:53.567730  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:55:53.598735  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:55:53.840714  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:53.840723  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:54.068640  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:55:54.098605  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1202 19:55:54.233531  412831 node_ready.go:57] node "addons-893295" has "Ready":"False" status (will retry)
	I1202 19:55:54.340593  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:54.340668  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:54.568616  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:55:54.598700  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:55:54.839990  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:54.840222  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:55.068064  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:55:55.099027  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:55:55.340316  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:55.340527  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:55.568496  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:55:55.599815  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:55:55.839663  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:55.839762  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:56.068035  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:55:56.098846  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1202 19:55:56.233686  412831 node_ready.go:57] node "addons-893295" has "Ready":"False" status (will retry)
	I1202 19:55:56.339861  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:56.339933  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:56.567976  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:55:56.598854  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:55:56.840347  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:56.840545  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:57.068370  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:55:57.099287  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:55:57.340591  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:57.340657  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:57.568563  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:55:57.599634  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:55:57.840658  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:57.840847  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:58.069106  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:55:58.099023  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:55:58.340340  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:58.340377  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:58.568686  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:55:58.598607  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1202 19:55:58.733812  412831 node_ready.go:57] node "addons-893295" has "Ready":"False" status (will retry)
	I1202 19:55:58.839935  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:58.839997  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:59.067792  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:55:59.098963  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:55:59.340356  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:59.340523  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:55:59.568917  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:55:59.598724  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:55:59.839969  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:55:59.840113  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:00.068210  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:00.099436  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:00.340813  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:00.340877  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:00.568932  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:00.598791  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:00.839674  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:00.839754  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:01.068793  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:01.098709  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1202 19:56:01.233846  412831 node_ready.go:57] node "addons-893295" has "Ready":"False" status (will retry)
	I1202 19:56:01.340105  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:01.340154  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:01.568435  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:01.599637  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:01.839944  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:01.840061  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:02.068381  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:02.099445  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:02.340607  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:02.340752  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:02.568713  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:02.598747  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:02.840544  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:02.840646  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:03.068858  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:03.098852  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:03.339824  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:03.339927  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:03.567799  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:03.599129  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1202 19:56:03.733059  412831 node_ready.go:57] node "addons-893295" has "Ready":"False" status (will retry)
	I1202 19:56:03.840292  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:03.840377  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:04.068346  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:04.099413  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:04.340480  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:04.340546  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:04.568640  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:04.598401  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:04.839770  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:04.839782  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:05.068782  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:05.098659  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:05.340432  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:05.340505  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:05.568811  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:05.598888  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1202 19:56:05.734101  412831 node_ready.go:57] node "addons-893295" has "Ready":"False" status (will retry)
	I1202 19:56:05.840110  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:05.840351  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:06.068458  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:06.099540  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:06.233255  412831 node_ready.go:49] node "addons-893295" is "Ready"
	I1202 19:56:06.233292  412831 node_ready.go:38] duration metric: took 41.003298051s for node "addons-893295" to be "Ready" ...
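The node-readiness wait that just completed boils down to polling the node's Ready condition through the Kubernetes API. A minimal client-go sketch of that idea is below; the kubeconfig path and the 2-second retry interval are illustrative assumptions, not details taken from minikube's node_ready.go.

// Sketch: poll a node's Ready condition with client-go, in the spirit of the
// node_ready.go wait above. Kubeconfig path, node name handling and the retry
// interval are assumptions for illustration only.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the named node currently has Ready=True.
func nodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()
	for {
		ready, err := nodeReady(ctx, cs, "addons-893295")
		if err == nil && ready {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(2 * time.Second) // the real helper uses its own retry/backoff
	}
}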
	I1202 19:56:06.233313  412831 api_server.go:52] waiting for apiserver process to appear ...
	I1202 19:56:06.233377  412831 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:56:06.250258  412831 api_server.go:72] duration metric: took 41.562259874s to wait for apiserver process to appear ...
	I1202 19:56:06.250290  412831 api_server.go:88] waiting for apiserver healthz status ...
	I1202 19:56:06.250319  412831 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1202 19:56:06.255555  412831 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
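The healthz probe recorded above (a GET against https://192.168.49.2:8443/healthz answering 200 with the body "ok") can be reproduced with a few lines of Go. The sketch below skips TLS verification purely for illustration; the actual check authenticates against the cluster's certificates rather than ignoring them.

// Sketch: probe the apiserver /healthz endpoint as logged above.
// InsecureSkipVerify is a simplification for illustration only.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.49.2:8443/healthz")
	if err != nil {
		fmt.Println("healthz not reachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// A healthy apiserver answers 200 with the literal body "ok".
	fmt.Printf("%d: %s\n", resp.StatusCode, body)
}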
	I1202 19:56:06.256602  412831 api_server.go:141] control plane version: v1.34.2
	I1202 19:56:06.256629  412831 api_server.go:131] duration metric: took 6.328947ms to wait for apiserver health ...
	I1202 19:56:06.256639  412831 system_pods.go:43] waiting for kube-system pods to appear ...
	I1202 19:56:06.260222  412831 system_pods.go:59] 20 kube-system pods found
	I1202 19:56:06.260262  412831 system_pods.go:61] "amd-gpu-device-plugin-nklpz" [9d4535df-fe2e-4f5a-8273-23b1b3e6d8b8] Pending
	I1202 19:56:06.260276  412831 system_pods.go:61] "coredns-66bc5c9577-9mvmk" [ca5a6890-e2db-40a3-8302-3fcc4309e66a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 19:56:06.260286  412831 system_pods.go:61] "csi-hostpath-attacher-0" [86ea36d5-0952-4bf9-82dd-fb267c9a17fe] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1202 19:56:06.260297  412831 system_pods.go:61] "csi-hostpath-resizer-0" [45b788c6-fc8c-49b8-883c-93d3160e893b] Pending
	I1202 19:56:06.260306  412831 system_pods.go:61] "csi-hostpathplugin-6h8dt" [782b735d-c731-4592-861f-0572e0581ce1] Pending
	I1202 19:56:06.260311  412831 system_pods.go:61] "etcd-addons-893295" [12b09750-804b-410b-8096-afb7db0b7cff] Running
	I1202 19:56:06.260320  412831 system_pods.go:61] "kindnet-bphsd" [035c64e4-9b5a-4fb5-9129-c78c186861ad] Running
	I1202 19:56:06.260324  412831 system_pods.go:61] "kube-apiserver-addons-893295" [44cedc90-0e81-4707-be65-2031c2da26db] Running
	I1202 19:56:06.260340  412831 system_pods.go:61] "kube-controller-manager-addons-893295" [f2a70c97-80a7-4072-8e06-31fdc7b7e92f] Running
	I1202 19:56:06.260349  412831 system_pods.go:61] "kube-ingress-dns-minikube" [8a7095ba-44a5-4e5c-bec7-847ffd18dc36] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1202 19:56:06.260357  412831 system_pods.go:61] "kube-proxy-2bxgd" [32906a03-9bcd-402b-948d-bcc65caa49fc] Running
	I1202 19:56:06.260363  412831 system_pods.go:61] "kube-scheduler-addons-893295" [85e7b347-9ab5-45c7-a9d3-2f9cdb139280] Running
	I1202 19:56:06.260373  412831 system_pods.go:61] "metrics-server-85b7d694d7-fbhzv" [51840c60-3fa9-4717-85ec-69d3082c6537] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1202 19:56:06.260382  412831 system_pods.go:61] "nvidia-device-plugin-daemonset-bkjsl" [ee51e4e2-139f-407a-a020-b6a91e40e7bf] Pending
	I1202 19:56:06.260388  412831 system_pods.go:61] "registry-6b586f9694-86wz6" [8dd65e02-986d-4a9b-9796-d9014d33d6d4] Pending
	I1202 19:56:06.260398  412831 system_pods.go:61] "registry-creds-764b6fb674-qwrlk" [242299e3-e588-4f0a-890d-da4c53cafcce] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1202 19:56:06.260408  412831 system_pods.go:61] "registry-proxy-stnrw" [e1efa7a9-b967-4abf-8104-14eb332f881f] Pending
	I1202 19:56:06.260414  412831 system_pods.go:61] "snapshot-controller-7d9fbc56b8-57ls2" [808c977f-4e69-4d1b-ba59-e82fe31100c7] Pending
	I1202 19:56:06.260422  412831 system_pods.go:61] "snapshot-controller-7d9fbc56b8-kwz4l" [7b98a4b5-96b5-4d1d-b6c6-983f165030db] Pending
	I1202 19:56:06.260430  412831 system_pods.go:61] "storage-provisioner" [d1b4b030-354a-45e2-aa34-ff9768a43e99] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 19:56:06.260438  412831 system_pods.go:74] duration metric: took 3.792353ms to wait for pod list to return data ...
	I1202 19:56:06.260451  412831 default_sa.go:34] waiting for default service account to be created ...
	I1202 19:56:06.264275  412831 default_sa.go:45] found service account: "default"
	I1202 19:56:06.264306  412831 default_sa.go:55] duration metric: took 3.846399ms for default service account to be created ...
	I1202 19:56:06.264319  412831 system_pods.go:116] waiting for k8s-apps to be running ...
	I1202 19:56:06.267772  412831 system_pods.go:86] 20 kube-system pods found
	I1202 19:56:06.267806  412831 system_pods.go:89] "amd-gpu-device-plugin-nklpz" [9d4535df-fe2e-4f5a-8273-23b1b3e6d8b8] Pending
	I1202 19:56:06.267817  412831 system_pods.go:89] "coredns-66bc5c9577-9mvmk" [ca5a6890-e2db-40a3-8302-3fcc4309e66a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 19:56:06.267826  412831 system_pods.go:89] "csi-hostpath-attacher-0" [86ea36d5-0952-4bf9-82dd-fb267c9a17fe] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1202 19:56:06.267836  412831 system_pods.go:89] "csi-hostpath-resizer-0" [45b788c6-fc8c-49b8-883c-93d3160e893b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1202 19:56:06.267842  412831 system_pods.go:89] "csi-hostpathplugin-6h8dt" [782b735d-c731-4592-861f-0572e0581ce1] Pending
	I1202 19:56:06.267848  412831 system_pods.go:89] "etcd-addons-893295" [12b09750-804b-410b-8096-afb7db0b7cff] Running
	I1202 19:56:06.267854  412831 system_pods.go:89] "kindnet-bphsd" [035c64e4-9b5a-4fb5-9129-c78c186861ad] Running
	I1202 19:56:06.267862  412831 system_pods.go:89] "kube-apiserver-addons-893295" [44cedc90-0e81-4707-be65-2031c2da26db] Running
	I1202 19:56:06.267867  412831 system_pods.go:89] "kube-controller-manager-addons-893295" [f2a70c97-80a7-4072-8e06-31fdc7b7e92f] Running
	I1202 19:56:06.267879  412831 system_pods.go:89] "kube-ingress-dns-minikube" [8a7095ba-44a5-4e5c-bec7-847ffd18dc36] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1202 19:56:06.267884  412831 system_pods.go:89] "kube-proxy-2bxgd" [32906a03-9bcd-402b-948d-bcc65caa49fc] Running
	I1202 19:56:06.267894  412831 system_pods.go:89] "kube-scheduler-addons-893295" [85e7b347-9ab5-45c7-a9d3-2f9cdb139280] Running
	I1202 19:56:06.267901  412831 system_pods.go:89] "metrics-server-85b7d694d7-fbhzv" [51840c60-3fa9-4717-85ec-69d3082c6537] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1202 19:56:06.267910  412831 system_pods.go:89] "nvidia-device-plugin-daemonset-bkjsl" [ee51e4e2-139f-407a-a020-b6a91e40e7bf] Pending
	I1202 19:56:06.267916  412831 system_pods.go:89] "registry-6b586f9694-86wz6" [8dd65e02-986d-4a9b-9796-d9014d33d6d4] Pending
	I1202 19:56:06.267925  412831 system_pods.go:89] "registry-creds-764b6fb674-qwrlk" [242299e3-e588-4f0a-890d-da4c53cafcce] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1202 19:56:06.267932  412831 system_pods.go:89] "registry-proxy-stnrw" [e1efa7a9-b967-4abf-8104-14eb332f881f] Pending
	I1202 19:56:06.267941  412831 system_pods.go:89] "snapshot-controller-7d9fbc56b8-57ls2" [808c977f-4e69-4d1b-ba59-e82fe31100c7] Pending
	I1202 19:56:06.267946  412831 system_pods.go:89] "snapshot-controller-7d9fbc56b8-kwz4l" [7b98a4b5-96b5-4d1d-b6c6-983f165030db] Pending
	I1202 19:56:06.267956  412831 system_pods.go:89] "storage-provisioner" [d1b4b030-354a-45e2-aa34-ff9768a43e99] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 19:56:06.267978  412831 retry.go:31] will retry after 232.050934ms: missing components: kube-dns
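The "will retry after ...: missing components: kube-dns" lines come from a generic poll-and-retry loop. A minimal sketch of that pattern follows; the exact backoff and jitter used by minikube's retry.go are not reproduced here, and the check function is a stand-in.

// Sketch of the retry pattern behind the "will retry after ..." lines:
// poll a condition and, on failure, wait an increasing (capped) interval.
package main

import (
	"errors"
	"fmt"
	"time"
)

// pollUntil runs check until it succeeds, the deadline passes, or the caller
// gives up; the wait doubles each attempt up to max.
func pollUntil(check func() error, initial, max, deadline time.Duration) error {
	wait := initial
	stop := time.Now().Add(deadline)
	for {
		err := check()
		if err == nil {
			return nil
		}
		if time.Now().After(stop) {
			return fmt.Errorf("timed out: %w", err)
		}
		fmt.Printf("will retry after %v: %v\n", wait, err)
		time.Sleep(wait)
		wait *= 2
		if wait > max {
			wait = max
		}
	}
}

func main() {
	tries := 0
	_ = pollUntil(func() error {
		// Stand-in for "are all required kube-system components running?"
		tries++
		if tries < 3 {
			return errors.New("missing components: kube-dns")
		}
		return nil
	}, 250*time.Millisecond, 2*time.Second, 30*time.Second)
}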
	I1202 19:56:06.339633  412831 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1202 19:56:06.339661  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:06.339638  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:06.507281  412831 system_pods.go:86] 20 kube-system pods found
	I1202 19:56:06.507327  412831 system_pods.go:89] "amd-gpu-device-plugin-nklpz" [9d4535df-fe2e-4f5a-8273-23b1b3e6d8b8] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1202 19:56:06.507341  412831 system_pods.go:89] "coredns-66bc5c9577-9mvmk" [ca5a6890-e2db-40a3-8302-3fcc4309e66a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 19:56:06.507352  412831 system_pods.go:89] "csi-hostpath-attacher-0" [86ea36d5-0952-4bf9-82dd-fb267c9a17fe] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1202 19:56:06.507362  412831 system_pods.go:89] "csi-hostpath-resizer-0" [45b788c6-fc8c-49b8-883c-93d3160e893b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1202 19:56:06.507370  412831 system_pods.go:89] "csi-hostpathplugin-6h8dt" [782b735d-c731-4592-861f-0572e0581ce1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1202 19:56:06.507376  412831 system_pods.go:89] "etcd-addons-893295" [12b09750-804b-410b-8096-afb7db0b7cff] Running
	I1202 19:56:06.507383  412831 system_pods.go:89] "kindnet-bphsd" [035c64e4-9b5a-4fb5-9129-c78c186861ad] Running
	I1202 19:56:06.507415  412831 system_pods.go:89] "kube-apiserver-addons-893295" [44cedc90-0e81-4707-be65-2031c2da26db] Running
	I1202 19:56:06.507426  412831 system_pods.go:89] "kube-controller-manager-addons-893295" [f2a70c97-80a7-4072-8e06-31fdc7b7e92f] Running
	I1202 19:56:06.507444  412831 system_pods.go:89] "kube-ingress-dns-minikube" [8a7095ba-44a5-4e5c-bec7-847ffd18dc36] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1202 19:56:06.507453  412831 system_pods.go:89] "kube-proxy-2bxgd" [32906a03-9bcd-402b-948d-bcc65caa49fc] Running
	I1202 19:56:06.507460  412831 system_pods.go:89] "kube-scheduler-addons-893295" [85e7b347-9ab5-45c7-a9d3-2f9cdb139280] Running
	I1202 19:56:06.507468  412831 system_pods.go:89] "metrics-server-85b7d694d7-fbhzv" [51840c60-3fa9-4717-85ec-69d3082c6537] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1202 19:56:06.507478  412831 system_pods.go:89] "nvidia-device-plugin-daemonset-bkjsl" [ee51e4e2-139f-407a-a020-b6a91e40e7bf] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1202 19:56:06.507487  412831 system_pods.go:89] "registry-6b586f9694-86wz6" [8dd65e02-986d-4a9b-9796-d9014d33d6d4] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1202 19:56:06.507497  412831 system_pods.go:89] "registry-creds-764b6fb674-qwrlk" [242299e3-e588-4f0a-890d-da4c53cafcce] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1202 19:56:06.507505  412831 system_pods.go:89] "registry-proxy-stnrw" [e1efa7a9-b967-4abf-8104-14eb332f881f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1202 19:56:06.507513  412831 system_pods.go:89] "snapshot-controller-7d9fbc56b8-57ls2" [808c977f-4e69-4d1b-ba59-e82fe31100c7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1202 19:56:06.507523  412831 system_pods.go:89] "snapshot-controller-7d9fbc56b8-kwz4l" [7b98a4b5-96b5-4d1d-b6c6-983f165030db] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1202 19:56:06.507531  412831 system_pods.go:89] "storage-provisioner" [d1b4b030-354a-45e2-aa34-ff9768a43e99] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 19:56:06.507555  412831 retry.go:31] will retry after 279.7801ms: missing components: kube-dns
	I1202 19:56:06.604867  412831 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1202 19:56:06.604893  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:06.604927  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:06.793536  412831 system_pods.go:86] 20 kube-system pods found
	I1202 19:56:06.793579  412831 system_pods.go:89] "amd-gpu-device-plugin-nklpz" [9d4535df-fe2e-4f5a-8273-23b1b3e6d8b8] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1202 19:56:06.793590  412831 system_pods.go:89] "coredns-66bc5c9577-9mvmk" [ca5a6890-e2db-40a3-8302-3fcc4309e66a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 19:56:06.793601  412831 system_pods.go:89] "csi-hostpath-attacher-0" [86ea36d5-0952-4bf9-82dd-fb267c9a17fe] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1202 19:56:06.793610  412831 system_pods.go:89] "csi-hostpath-resizer-0" [45b788c6-fc8c-49b8-883c-93d3160e893b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1202 19:56:06.793619  412831 system_pods.go:89] "csi-hostpathplugin-6h8dt" [782b735d-c731-4592-861f-0572e0581ce1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1202 19:56:06.793630  412831 system_pods.go:89] "etcd-addons-893295" [12b09750-804b-410b-8096-afb7db0b7cff] Running
	I1202 19:56:06.793642  412831 system_pods.go:89] "kindnet-bphsd" [035c64e4-9b5a-4fb5-9129-c78c186861ad] Running
	I1202 19:56:06.793647  412831 system_pods.go:89] "kube-apiserver-addons-893295" [44cedc90-0e81-4707-be65-2031c2da26db] Running
	I1202 19:56:06.793655  412831 system_pods.go:89] "kube-controller-manager-addons-893295" [f2a70c97-80a7-4072-8e06-31fdc7b7e92f] Running
	I1202 19:56:06.793667  412831 system_pods.go:89] "kube-ingress-dns-minikube" [8a7095ba-44a5-4e5c-bec7-847ffd18dc36] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1202 19:56:06.793674  412831 system_pods.go:89] "kube-proxy-2bxgd" [32906a03-9bcd-402b-948d-bcc65caa49fc] Running
	I1202 19:56:06.793680  412831 system_pods.go:89] "kube-scheduler-addons-893295" [85e7b347-9ab5-45c7-a9d3-2f9cdb139280] Running
	I1202 19:56:06.793688  412831 system_pods.go:89] "metrics-server-85b7d694d7-fbhzv" [51840c60-3fa9-4717-85ec-69d3082c6537] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1202 19:56:06.793696  412831 system_pods.go:89] "nvidia-device-plugin-daemonset-bkjsl" [ee51e4e2-139f-407a-a020-b6a91e40e7bf] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1202 19:56:06.793708  412831 system_pods.go:89] "registry-6b586f9694-86wz6" [8dd65e02-986d-4a9b-9796-d9014d33d6d4] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1202 19:56:06.793716  412831 system_pods.go:89] "registry-creds-764b6fb674-qwrlk" [242299e3-e588-4f0a-890d-da4c53cafcce] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1202 19:56:06.793725  412831 system_pods.go:89] "registry-proxy-stnrw" [e1efa7a9-b967-4abf-8104-14eb332f881f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1202 19:56:06.793732  412831 system_pods.go:89] "snapshot-controller-7d9fbc56b8-57ls2" [808c977f-4e69-4d1b-ba59-e82fe31100c7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1202 19:56:06.793743  412831 system_pods.go:89] "snapshot-controller-7d9fbc56b8-kwz4l" [7b98a4b5-96b5-4d1d-b6c6-983f165030db] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1202 19:56:06.793751  412831 system_pods.go:89] "storage-provisioner" [d1b4b030-354a-45e2-aa34-ff9768a43e99] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 19:56:06.793774  412831 retry.go:31] will retry after 442.819697ms: missing components: kube-dns
	I1202 19:56:06.840501  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:06.840669  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:07.069732  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:07.100154  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:07.242766  412831 system_pods.go:86] 20 kube-system pods found
	I1202 19:56:07.242806  412831 system_pods.go:89] "amd-gpu-device-plugin-nklpz" [9d4535df-fe2e-4f5a-8273-23b1b3e6d8b8] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1202 19:56:07.242818  412831 system_pods.go:89] "coredns-66bc5c9577-9mvmk" [ca5a6890-e2db-40a3-8302-3fcc4309e66a] Running
	I1202 19:56:07.242830  412831 system_pods.go:89] "csi-hostpath-attacher-0" [86ea36d5-0952-4bf9-82dd-fb267c9a17fe] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1202 19:56:07.242839  412831 system_pods.go:89] "csi-hostpath-resizer-0" [45b788c6-fc8c-49b8-883c-93d3160e893b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1202 19:56:07.242855  412831 system_pods.go:89] "csi-hostpathplugin-6h8dt" [782b735d-c731-4592-861f-0572e0581ce1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1202 19:56:07.242861  412831 system_pods.go:89] "etcd-addons-893295" [12b09750-804b-410b-8096-afb7db0b7cff] Running
	I1202 19:56:07.242867  412831 system_pods.go:89] "kindnet-bphsd" [035c64e4-9b5a-4fb5-9129-c78c186861ad] Running
	I1202 19:56:07.242872  412831 system_pods.go:89] "kube-apiserver-addons-893295" [44cedc90-0e81-4707-be65-2031c2da26db] Running
	I1202 19:56:07.242887  412831 system_pods.go:89] "kube-controller-manager-addons-893295" [f2a70c97-80a7-4072-8e06-31fdc7b7e92f] Running
	I1202 19:56:07.242896  412831 system_pods.go:89] "kube-ingress-dns-minikube" [8a7095ba-44a5-4e5c-bec7-847ffd18dc36] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1202 19:56:07.242901  412831 system_pods.go:89] "kube-proxy-2bxgd" [32906a03-9bcd-402b-948d-bcc65caa49fc] Running
	I1202 19:56:07.242907  412831 system_pods.go:89] "kube-scheduler-addons-893295" [85e7b347-9ab5-45c7-a9d3-2f9cdb139280] Running
	I1202 19:56:07.242915  412831 system_pods.go:89] "metrics-server-85b7d694d7-fbhzv" [51840c60-3fa9-4717-85ec-69d3082c6537] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1202 19:56:07.242923  412831 system_pods.go:89] "nvidia-device-plugin-daemonset-bkjsl" [ee51e4e2-139f-407a-a020-b6a91e40e7bf] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1202 19:56:07.242934  412831 system_pods.go:89] "registry-6b586f9694-86wz6" [8dd65e02-986d-4a9b-9796-d9014d33d6d4] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1202 19:56:07.242942  412831 system_pods.go:89] "registry-creds-764b6fb674-qwrlk" [242299e3-e588-4f0a-890d-da4c53cafcce] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1202 19:56:07.242950  412831 system_pods.go:89] "registry-proxy-stnrw" [e1efa7a9-b967-4abf-8104-14eb332f881f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1202 19:56:07.242958  412831 system_pods.go:89] "snapshot-controller-7d9fbc56b8-57ls2" [808c977f-4e69-4d1b-ba59-e82fe31100c7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1202 19:56:07.242968  412831 system_pods.go:89] "snapshot-controller-7d9fbc56b8-kwz4l" [7b98a4b5-96b5-4d1d-b6c6-983f165030db] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1202 19:56:07.242973  412831 system_pods.go:89] "storage-provisioner" [d1b4b030-354a-45e2-aa34-ff9768a43e99] Running
	I1202 19:56:07.242986  412831 system_pods.go:126] duration metric: took 978.660182ms to wait for k8s-apps to be running ...
	I1202 19:56:07.242995  412831 system_svc.go:44] waiting for kubelet service to be running ....
	I1202 19:56:07.243059  412831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 19:56:07.261962  412831 system_svc.go:56] duration metric: took 18.953254ms WaitForService to wait for kubelet
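The WaitForService step above reduces to running systemctl inside the node and checking the exit status. The sketch below runs the equivalent command locally via os/exec for illustration; minikube executes it over its ssh_runner inside the container/VM, so treat the local invocation as an assumption.

// Sketch: `systemctl is-active --quiet kubelet` exits 0 when the unit is active.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("systemctl", "is-active", "--quiet", "kubelet")
	if err := cmd.Run(); err != nil {
		// Non-zero exit (or systemctl being unavailable) means not active.
		fmt.Println("kubelet is not active:", err)
		return
	}
	fmt.Println("kubelet is active")
}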
	I1202 19:56:07.262001  412831 kubeadm.go:587] duration metric: took 42.574009753s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 19:56:07.262028  412831 node_conditions.go:102] verifying NodePressure condition ...
	I1202 19:56:07.265514  412831 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1202 19:56:07.265541  412831 node_conditions.go:123] node cpu capacity is 8
	I1202 19:56:07.265558  412831 node_conditions.go:105] duration metric: took 3.525254ms to run NodePressure ...
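The NodePressure verification above reads the node's capacity (ephemeral storage, CPU) and its pressure conditions from the API. A short client-go sketch of that read follows; the kubeconfig path is an assumed placeholder.

// Sketch: read node capacity and pressure conditions, as in the NodePressure
// check above. On a healthy node the pressure conditions report "False".
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.Background(), "addons-893295", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("cpu capacity:", node.Status.Capacity.Cpu().String())
	fmt.Println("ephemeral-storage capacity:", node.Status.Capacity.StorageEphemeral().String())
	for _, c := range node.Status.Conditions {
		switch c.Type {
		case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
			fmt.Printf("%s=%s\n", c.Type, c.Status)
		}
	}
}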
	I1202 19:56:07.265573  412831 start.go:242] waiting for startup goroutines ...
	I1202 19:56:07.340815  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:07.340855  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:07.568918  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:07.599307  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:07.842830  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:07.842884  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:08.069425  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:08.099340  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:08.341255  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:08.341397  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:08.568725  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:08.599656  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:08.840164  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:08.840328  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:09.068704  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:09.098944  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:09.344266  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:09.344504  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:09.568987  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:09.599328  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:09.841730  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:09.841921  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:10.069056  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:10.099746  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:10.340362  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:10.340382  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:10.568858  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:10.599039  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:10.841423  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:10.842310  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:11.068254  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:11.099791  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:11.340413  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:11.340514  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:11.568533  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:11.599838  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:11.840480  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:11.840572  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:12.068780  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:12.099020  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:12.340521  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:12.340554  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:12.568895  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:12.599457  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:12.842263  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:12.842268  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:13.068500  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:13.099472  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:13.340917  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:13.340970  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:13.569179  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:13.599497  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:13.841521  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:13.841785  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:14.071641  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:14.098996  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:14.340999  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:14.341209  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:14.568019  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:14.598891  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:14.840738  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:14.840849  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:15.068579  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:15.099643  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:15.340261  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:15.340320  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:15.568710  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:15.669382  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:15.840826  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:15.840826  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:16.068685  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:16.098613  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:16.340116  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:16.340272  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:16.568119  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:16.599129  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:16.841300  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:16.841405  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:17.068718  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:17.098921  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:17.340635  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:17.340669  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:17.568885  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:17.599347  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:17.841122  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:17.841259  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:18.097236  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:18.098678  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:18.460302  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:18.460990  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:18.568582  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:18.599579  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:18.841814  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:18.841916  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:19.069160  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:19.099318  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:19.341132  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:19.341283  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:19.568630  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:19.598740  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:19.840321  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:19.840355  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:20.068921  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:20.099195  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:20.340606  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:20.340628  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:20.568373  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:20.599451  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:20.840166  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:20.840263  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:21.068272  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:21.099287  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:21.341896  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:21.342204  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:21.568573  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:21.599662  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:21.840232  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:21.840322  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:22.069160  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:22.099055  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:22.340697  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:22.340859  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:22.568684  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:22.598726  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:22.840963  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:22.841024  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:23.068045  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:23.099265  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:23.340933  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:23.340935  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:23.569138  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:23.599251  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:23.840365  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:23.840383  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:24.068540  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:24.099836  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:24.340486  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:24.340623  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:24.569135  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:24.598769  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:24.840789  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:24.840976  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:25.068882  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:25.098947  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:25.341123  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:25.341147  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:25.567881  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:25.598842  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:25.841233  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:25.841253  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:26.069607  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:26.099838  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:26.341045  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:26.341115  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:26.568922  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:26.598989  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:26.841452  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:26.841473  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:27.068875  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:27.099373  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:27.355846  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:27.355863  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:27.569786  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:27.598774  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:27.840778  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:27.840964  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:28.068660  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:28.170093  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:28.340875  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:28.341040  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:28.570325  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:28.599460  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:28.841411  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:28.841685  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:29.068824  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:29.169578  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:29.339794  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:29.339835  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:29.568706  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:29.598648  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:29.840250  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:29.840245  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:30.068121  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:30.098992  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:30.340586  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:30.340661  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:30.569062  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:30.600247  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:30.841287  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:30.841758  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:31.068121  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:31.099370  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:31.341472  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:31.341495  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:31.569114  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:31.599606  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:31.840489  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:31.840624  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:32.069063  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:32.099241  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:32.341320  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:32.341334  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:32.568537  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:32.600117  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:32.840788  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:32.840968  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:33.069406  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:33.099501  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:33.341156  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:33.341201  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:33.567803  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:33.598748  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:33.844421  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:33.844439  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:34.068417  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:34.099870  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:34.340175  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:34.340217  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:34.568449  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:34.599539  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:34.840453  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:34.840516  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:35.068263  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:35.099487  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:35.341115  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:35.341340  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:35.567965  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:35.598959  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:35.840564  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:35.840738  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:36.068717  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:36.099529  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:36.340587  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:36.340592  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:36.568620  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:36.599734  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:36.841124  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:36.841258  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:37.068923  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:37.099134  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:56:37.340980  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:37.340978  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:37.569331  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:37.599499  412831 kapi.go:107] duration metric: took 1m11.503979432s to wait for kubernetes.io/minikube-addons=registry ...
	I1202 19:56:37.841288  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:37.841318  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:38.068570  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:38.340263  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:38.340273  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:38.568808  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:38.841479  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:38.841517  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:39.067850  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:39.340450  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:39.340528  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:39.570112  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:39.840869  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:39.841713  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:40.069886  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:40.343121  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:40.344337  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:40.569583  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:40.877178  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:40.877194  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:41.068404  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:41.341412  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:41.341725  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:41.568707  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:41.840345  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:41.840559  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:42.069090  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:42.340982  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:42.341116  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:42.569259  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:42.843354  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:42.843380  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:43.068223  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:43.341388  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:43.341479  412831 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:56:43.568455  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:43.839770  412831 kapi.go:107] duration metric: took 1m17.003256013s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1202 19:56:43.840180  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:44.069742  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:44.343306  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:44.592121  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:44.840891  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:45.069409  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:45.341162  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:45.567927  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:45.840804  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:46.069496  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:46.340441  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:46.568007  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:46.840930  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:47.068408  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:47.341031  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:47.567859  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:47.840129  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:48.067855  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:48.340746  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:48.570182  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:48.840245  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:49.069351  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:56:49.341301  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:49.569384  412831 kapi.go:107] duration metric: took 1m16.504581331s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1202 19:56:49.571417  412831 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-893295 cluster.
	I1202 19:56:49.572750  412831 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1202 19:56:49.574338  412831 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
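A note on the `gcp-auth-skip-secret` hint printed above: a pod carrying that label is left untouched by the gcp-auth webhook, while every other pod gets the GCP credentials mounted. The sketch below is illustrative only, assuming client-go and a label value of "true" (the minikube message names only the key; the value, pod name, and namespace here are assumptions, not minikube's own code).

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// buildSkipPod returns a pod labeled so the gcp-auth webhook skips credential injection.
// The label key comes from the minikube output above; the "true" value is an assumption.
func buildSkipPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "no-gcp-creds",
			Labels: map[string]string{"gcp-auth-skip-secret": "true"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "app",
				Image:   "gcr.io/k8s-minikube/busybox:1.28.4-glibc",
				Command: []string{"sleep", "3600"},
			}},
		},
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	pod, err := cs.CoreV1().Pods("default").Create(context.Background(), buildSkipPod(), metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created pod without GCP credential injection:", pod.Name)
}

Creating the same pod without the label would get the credential mount described in the message above.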
	I1202 19:56:49.840570  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:50.340445  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:50.873583  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:51.341408  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:51.840861  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:52.341898  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:52.841211  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:53.340469  412831 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:56:53.840945  412831 kapi.go:107] duration metric: took 1m27.00444127s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1202 19:56:53.842506  412831 out.go:179] * Enabled addons: ingress-dns, nvidia-device-plugin, cloud-spanner, default-storageclass, inspektor-gadget, amd-gpu-device-plugin, registry-creds, storage-provisioner, yakd, storage-provisioner-rancher, metrics-server, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1202 19:56:53.843651  412831 addons.go:530] duration metric: took 1m29.155734667s for enable addons: enabled=[ingress-dns nvidia-device-plugin cloud-spanner default-storageclass inspektor-gadget amd-gpu-device-plugin registry-creds storage-provisioner yakd storage-provisioner-rancher metrics-server volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1202 19:56:53.843706  412831 start.go:247] waiting for cluster config update ...
	I1202 19:56:53.843737  412831 start.go:256] writing updated cluster config ...
	I1202 19:56:53.844053  412831 ssh_runner.go:195] Run: rm -f paused
	I1202 19:56:53.848462  412831 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1202 19:56:53.851766  412831 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-9mvmk" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 19:56:53.856792  412831 pod_ready.go:94] pod "coredns-66bc5c9577-9mvmk" is "Ready"
	I1202 19:56:53.856820  412831 pod_ready.go:86] duration metric: took 5.031488ms for pod "coredns-66bc5c9577-9mvmk" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 19:56:53.859208  412831 pod_ready.go:83] waiting for pod "etcd-addons-893295" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 19:56:53.863133  412831 pod_ready.go:94] pod "etcd-addons-893295" is "Ready"
	I1202 19:56:53.863165  412831 pod_ready.go:86] duration metric: took 3.93138ms for pod "etcd-addons-893295" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 19:56:53.865500  412831 pod_ready.go:83] waiting for pod "kube-apiserver-addons-893295" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 19:56:53.869547  412831 pod_ready.go:94] pod "kube-apiserver-addons-893295" is "Ready"
	I1202 19:56:53.869575  412831 pod_ready.go:86] duration metric: took 4.044043ms for pod "kube-apiserver-addons-893295" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 19:56:53.871548  412831 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-893295" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 19:56:54.252801  412831 pod_ready.go:94] pod "kube-controller-manager-addons-893295" is "Ready"
	I1202 19:56:54.252830  412831 pod_ready.go:86] duration metric: took 381.260599ms for pod "kube-controller-manager-addons-893295" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 19:56:54.453773  412831 pod_ready.go:83] waiting for pod "kube-proxy-2bxgd" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 19:56:54.852745  412831 pod_ready.go:94] pod "kube-proxy-2bxgd" is "Ready"
	I1202 19:56:54.852783  412831 pod_ready.go:86] duration metric: took 398.979558ms for pod "kube-proxy-2bxgd" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 19:56:55.056082  412831 pod_ready.go:83] waiting for pod "kube-scheduler-addons-893295" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 19:56:55.452727  412831 pod_ready.go:94] pod "kube-scheduler-addons-893295" is "Ready"
	I1202 19:56:55.452763  412831 pod_ready.go:86] duration metric: took 396.644943ms for pod "kube-scheduler-addons-893295" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 19:56:55.452778  412831 pod_ready.go:40] duration metric: took 1.604275769s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1202 19:56:55.497587  412831 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1202 19:56:55.500390  412831 out.go:179] * Done! kubectl is now configured to use "addons-893295" cluster and "default" namespace by default
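Most of the log above is minikube's kapi.go:96 loop reporting addon pods that are still Pending, followed by pod_ready.go waiting on the core kube-system components; the reported 1m11s–1m27s durations are simply how long those polls ran before the registry, ingress-nginx, gcp-auth and csi-hostpath-driver pods came up. Below is a rough, non-authoritative sketch of that kind of wait, assuming client-go and a plain phase check; minikube's real helpers differ in interval, readiness criteria, and error handling.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPodsRunning polls pods matching a label selector until all of them report
// phase Running, roughly what the "waiting for pod ... current state: Pending" lines show.
func waitForPodsRunning(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true, func(ctx context.Context) (bool, error) {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil || len(pods.Items) == 0 {
			return false, nil // transient errors and empty lists: keep polling
		}
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
				return false, nil
			}
		}
		return true, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitForPodsRunning(context.Background(), cs, "kube-system", "kubernetes.io/minikube-addons=registry", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("registry pod is Running")
}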
	
	
	==> CRI-O <==
	Dec 02 19:56:52 addons-893295 crio[767]: time="2025-12-02T19:56:52.785700666Z" level=info msg="Starting container: 72a3a94a8615446f6a8a6edf8cab89a31462a9125890a07caa7b5c08f54ee5d4" id=88879f7f-9f99-4189-b899-efa8437c39be name=/runtime.v1.RuntimeService/StartContainer
	Dec 02 19:56:52 addons-893295 crio[767]: time="2025-12-02T19:56:52.789056234Z" level=info msg="Started container" PID=6062 containerID=72a3a94a8615446f6a8a6edf8cab89a31462a9125890a07caa7b5c08f54ee5d4 description=kube-system/csi-hostpathplugin-6h8dt/csi-snapshotter id=88879f7f-9f99-4189-b899-efa8437c39be name=/runtime.v1.RuntimeService/StartContainer sandboxID=bfc8afb7675846e4956370559f5638c10eadcda2bd66bd58e9399cf0376bd248
	Dec 02 19:56:56 addons-893295 crio[767]: time="2025-12-02T19:56:56.342789168Z" level=info msg="Running pod sandbox: default/busybox/POD" id=d8e570ee-77c8-4771-947f-0049fac0ec7f name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 02 19:56:56 addons-893295 crio[767]: time="2025-12-02T19:56:56.342871169Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 19:56:56 addons-893295 crio[767]: time="2025-12-02T19:56:56.35043499Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:bd4ce7d269bc5006c94c7168913625fe17a1043ea674ab4666ffc5344c27be3e UID:76f9d798-f2b6-4d6f-9f6c-3fba90dc0c01 NetNS:/var/run/netns/4c37975a-bb5e-4b91-aa7e-a2b565cc0a19 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000172020}] Aliases:map[]}"
	Dec 02 19:56:56 addons-893295 crio[767]: time="2025-12-02T19:56:56.350466727Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 02 19:56:56 addons-893295 crio[767]: time="2025-12-02T19:56:56.360678629Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:bd4ce7d269bc5006c94c7168913625fe17a1043ea674ab4666ffc5344c27be3e UID:76f9d798-f2b6-4d6f-9f6c-3fba90dc0c01 NetNS:/var/run/netns/4c37975a-bb5e-4b91-aa7e-a2b565cc0a19 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000172020}] Aliases:map[]}"
	Dec 02 19:56:56 addons-893295 crio[767]: time="2025-12-02T19:56:56.360823174Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 02 19:56:56 addons-893295 crio[767]: time="2025-12-02T19:56:56.361746677Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 02 19:56:56 addons-893295 crio[767]: time="2025-12-02T19:56:56.362568897Z" level=info msg="Ran pod sandbox bd4ce7d269bc5006c94c7168913625fe17a1043ea674ab4666ffc5344c27be3e with infra container: default/busybox/POD" id=d8e570ee-77c8-4771-947f-0049fac0ec7f name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 02 19:56:56 addons-893295 crio[767]: time="2025-12-02T19:56:56.3638497Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=70d69c96-cbff-49f7-bb1d-fff801a1b605 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:56:56 addons-893295 crio[767]: time="2025-12-02T19:56:56.363996807Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=70d69c96-cbff-49f7-bb1d-fff801a1b605 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:56:56 addons-893295 crio[767]: time="2025-12-02T19:56:56.364040173Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=70d69c96-cbff-49f7-bb1d-fff801a1b605 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:56:56 addons-893295 crio[767]: time="2025-12-02T19:56:56.364599822Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=3eba8260-6233-494c-b515-69bb3d5f3960 name=/runtime.v1.ImageService/PullImage
	Dec 02 19:56:56 addons-893295 crio[767]: time="2025-12-02T19:56:56.366177447Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 02 19:56:58 addons-893295 crio[767]: time="2025-12-02T19:56:58.267147119Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=3eba8260-6233-494c-b515-69bb3d5f3960 name=/runtime.v1.ImageService/PullImage
	Dec 02 19:56:58 addons-893295 crio[767]: time="2025-12-02T19:56:58.267863107Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=2898cbb3-9c89-48a0-a4d3-57f5a9f23ef5 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:56:58 addons-893295 crio[767]: time="2025-12-02T19:56:58.269535498Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=cfa64495-1bb5-4b94-9c52-2054c7a7ed49 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:56:58 addons-893295 crio[767]: time="2025-12-02T19:56:58.273278518Z" level=info msg="Creating container: default/busybox/busybox" id=32eb13a4-beca-4a5e-9d03-9912f7b6ed7f name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 19:56:58 addons-893295 crio[767]: time="2025-12-02T19:56:58.273418663Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 19:56:58 addons-893295 crio[767]: time="2025-12-02T19:56:58.280784104Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 19:56:58 addons-893295 crio[767]: time="2025-12-02T19:56:58.281239584Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 19:56:58 addons-893295 crio[767]: time="2025-12-02T19:56:58.317136456Z" level=info msg="Created container 4043884cb56dd9bd829a448b2d14dd2fdf9184af8185cfe5fb411dac6586992e: default/busybox/busybox" id=32eb13a4-beca-4a5e-9d03-9912f7b6ed7f name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 19:56:58 addons-893295 crio[767]: time="2025-12-02T19:56:58.317955279Z" level=info msg="Starting container: 4043884cb56dd9bd829a448b2d14dd2fdf9184af8185cfe5fb411dac6586992e" id=f070084d-cc86-4563-9228-d2ad36894383 name=/runtime.v1.RuntimeService/StartContainer
	Dec 02 19:56:58 addons-893295 crio[767]: time="2025-12-02T19:56:58.320272011Z" level=info msg="Started container" PID=6184 containerID=4043884cb56dd9bd829a448b2d14dd2fdf9184af8185cfe5fb411dac6586992e description=default/busybox/busybox id=f070084d-cc86-4563-9228-d2ad36894383 name=/runtime.v1.RuntimeService/StartContainer sandboxID=bd4ce7d269bc5006c94c7168913625fe17a1043ea674ab4666ffc5344c27be3e
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                        NAMESPACE
	4043884cb56dd       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          8 seconds ago        Running             busybox                                  0                   bd4ce7d269bc5       busybox                                    default
	72a3a94a86154       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          14 seconds ago       Running             csi-snapshotter                          0                   bfc8afb767584       csi-hostpathplugin-6h8dt                   kube-system
	3fc3b9c2bb546       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          15 seconds ago       Running             csi-provisioner                          0                   bfc8afb767584       csi-hostpathplugin-6h8dt                   kube-system
	23592b1014e08       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            16 seconds ago       Running             liveness-probe                           0                   bfc8afb767584       csi-hostpathplugin-6h8dt                   kube-system
	4873f6a4745b9       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           17 seconds ago       Running             hostpath                                 0                   bfc8afb767584       csi-hostpathplugin-6h8dt                   kube-system
	a46f3e2f5f2db       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 17 seconds ago       Running             gcp-auth                                 0                   3ad1bcb052644       gcp-auth-78565c9fb4-2jfqm                  gcp-auth
	69202c0144e36       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                20 seconds ago       Running             node-driver-registrar                    0                   bfc8afb767584       csi-hostpathplugin-6h8dt                   kube-system
	25661eee6e26e       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:9a12b3c1d155bb081ff408a9b6c1cec18573c967e0c3917225b81ffe11c0b7f2                            21 seconds ago       Running             gadget                                   0                   e06c27e61f63d       gadget-ps8xn                               gadget
	1206c9b45b619       registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27                             23 seconds ago       Running             controller                               0                   3f3f6db8899f8       ingress-nginx-controller-6c8bf45fb-sjqdl   ingress-nginx
	e272a50ae70ce       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      27 seconds ago       Running             volume-snapshot-controller               0                   3d6e29dddc59f       snapshot-controller-7d9fbc56b8-57ls2       kube-system
	343bfc0b495be       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      27 seconds ago       Running             volume-snapshot-controller               0                   620bab088befd       snapshot-controller-7d9fbc56b8-kwz4l       kube-system
	c935f2bdad559       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              28 seconds ago       Running             csi-resizer                              0                   a18a7a7a8a3df       csi-hostpath-resizer-0                     kube-system
	2021a9af4b97c       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              29 seconds ago       Running             registry-proxy                           0                   fda091e962415       registry-proxy-stnrw                       kube-system
	7d3c2329b0b0c       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     32 seconds ago       Running             amd-gpu-device-plugin                    0                   ba85d7878b417       amd-gpu-device-plugin-nklpz                kube-system
	33d9c5ffbca0f       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   34 seconds ago       Running             csi-external-health-monitor-controller   0                   bfc8afb767584       csi-hostpathplugin-6h8dt                   kube-system
	c59167a3c785b       nvcr.io/nvidia/k8s-device-plugin@sha256:20db699f1480b6f37423cab909e9c6df5a4fdbd981b405e0d72f00a86fee5100                                     35 seconds ago       Running             nvidia-device-plugin-ctr                 0                   c2625225a9c92       nvidia-device-plugin-daemonset-bkjsl       kube-system
	22d14c28c8779       884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45                                                                             37 seconds ago       Exited              patch                                    2                   22789b8159171       ingress-nginx-admission-patch-szllz        ingress-nginx
	ac11c167ef066       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:f016159150cb72d879e0d3b6852afbed68fe21d86be1e92c62ab7f56515287f5                   38 seconds ago       Exited              patch                                    0                   5ccb93128ec4d       gcp-auth-certs-patch-m58jp                 gcp-auth
	cb9cd75fae78a       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              39 seconds ago       Running             yakd                                     0                   bc83c123410f9       yakd-dashboard-5ff678cb9-vvw4f             yakd-dashboard
	db634b271f302       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:f016159150cb72d879e0d3b6852afbed68fe21d86be1e92c62ab7f56515287f5                   42 seconds ago       Exited              create                                   0                   b610effb1bfe5       gcp-auth-certs-create-xdj5b                gcp-auth
	a852aad52763b       gcr.io/cloud-spanner-emulator/emulator@sha256:22a4d5b0f97bd0c2ee20da342493c5a60e40b4d62ec20c174cb32ff4ee1f65bf                               42 seconds ago       Running             cloud-spanner-emulator                   0                   daa7ab6489f14       cloud-spanner-emulator-5bdddb765-jf7fb     default
	1d0670321bc4a       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             47 seconds ago       Running             csi-attacher                             0                   71fd3c5475d50       csi-hostpath-attacher-0                    kube-system
	b0b7cf0d49211       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:f016159150cb72d879e0d3b6852afbed68fe21d86be1e92c62ab7f56515287f5                   48 seconds ago       Exited              create                                   0                   18b90e176390f       ingress-nginx-admission-create-bbp4j       ingress-nginx
	9012f9d6215d1       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           49 seconds ago       Running             registry                                 0                   57f5ca7658334       registry-6b586f9694-86wz6                  kube-system
	c2442e5b2ee0f       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             51 seconds ago       Running             local-path-provisioner                   0                   3611fa6ce6525       local-path-provisioner-648f6765c9-pjsp5    local-path-storage
	457ec4512e89c       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               53 seconds ago       Running             minikube-ingress-dns                     0                   8f364ab437b38       kube-ingress-dns-minikube                  kube-system
	91253d86ed19b       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        59 seconds ago       Running             metrics-server                           0                   d06f510aed173       metrics-server-85b7d694d7-fbhzv            kube-system
	1a4586dbac8e8       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             About a minute ago   Running             coredns                                  0                   5455c2b2c4e2c       coredns-66bc5c9577-9mvmk                   kube-system
	548b1d008679f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             About a minute ago   Running             storage-provisioner                      0                   4fe2877403abb       storage-provisioner                        kube-system
	92d33e649bb3a       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                                                             About a minute ago   Running             kube-proxy                               0                   b420dc7f912c9       kube-proxy-2bxgd                           kube-system
	36e4834af5630       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             About a minute ago   Running             kindnet-cni                              0                   1cb45e8f8d976       kindnet-bphsd                              kube-system
	1053c12fee90a       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                                                             About a minute ago   Running             kube-controller-manager                  0                   2d840971538c4       kube-controller-manager-addons-893295      kube-system
	87e76e15e8595       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                                                             About a minute ago   Running             etcd                                     0                   b57d6fa8076e3       etcd-addons-893295                         kube-system
	54de7a8ca3420       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                                                             About a minute ago   Running             kube-scheduler                           0                   fdfc3db85baa5       kube-scheduler-addons-893295               kube-system
	64bbafcaa8986       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                                                             About a minute ago   Running             kube-apiserver                           0                   015a7594dd1f5       kube-apiserver-addons-893295               kube-system
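	A container listing like the one above reflects CRI state on the node and can usually be reproduced with crictl over the profile's SSH session (illustrative commands; the profile name addons-893295 is taken from this report):
	  # List all containers, including exited ones, on the node backing this profile.
	  minikube -p addons-893295 ssh -- sudo crictl ps -a
	  # Narrow to one workload by container-name filter, e.g. the gcp-auth webhook.
	  minikube -p addons-893295 ssh -- sudo crictl ps -a --name gcp-auth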
	
	
	==> coredns [1a4586dbac8e8d1828435d72cdf3947bd1869e463e0102cc7b6664ebbeddeacf] <==
	[INFO] 10.244.0.13:57176 - 28607 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00022255s
	[INFO] 10.244.0.13:52548 - 2283 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000129383s
	[INFO] 10.244.0.13:52548 - 1978 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000130298s
	[INFO] 10.244.0.13:35670 - 2233 "A IN registry.kube-system.svc.cluster.local.europe-west2-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,aa,rd,ra 206 0.000105619s
	[INFO] 10.244.0.13:35670 - 2467 "AAAA IN registry.kube-system.svc.cluster.local.europe-west2-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,aa,rd,ra 206 0.000150132s
	[INFO] 10.244.0.13:49194 - 13472 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000099595s
	[INFO] 10.244.0.13:49194 - 13812 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000175689s
	[INFO] 10.244.0.13:45893 - 36974 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000099762s
	[INFO] 10.244.0.13:45893 - 37270 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000172164s
	[INFO] 10.244.0.13:50448 - 38962 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000135557s
	[INFO] 10.244.0.13:50448 - 38767 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000146296s
	[INFO] 10.244.0.22:51724 - 54414 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000211452s
	[INFO] 10.244.0.22:54292 - 62075 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000269518s
	[INFO] 10.244.0.22:38553 - 4745 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000159126s
	[INFO] 10.244.0.22:56499 - 11084 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000150236s
	[INFO] 10.244.0.22:36554 - 47091 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000111875s
	[INFO] 10.244.0.22:48819 - 33243 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000150311s
	[INFO] 10.244.0.22:58095 - 23803 "A IN storage.googleapis.com.europe-west2-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 190 0.005135561s
	[INFO] 10.244.0.22:46760 - 30608 "AAAA IN storage.googleapis.com.europe-west2-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 190 0.006942093s
	[INFO] 10.244.0.22:57552 - 45957 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.004772385s
	[INFO] 10.244.0.22:59923 - 41584 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.004980206s
	[INFO] 10.244.0.22:42005 - 63360 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.005137142s
	[INFO] 10.244.0.22:41974 - 23037 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.005781662s
	[INFO] 10.244.0.22:52086 - 14928 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001165022s
	[INFO] 10.244.0.22:48586 - 59113 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 268 0.001357076s
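	The NXDOMAIN entries above are the expected effect of the pod resolver's search-path (ndots) expansion: each lookup for registry.kube-system.svc.cluster.local or storage.googleapis.com is first attempted with the cluster and GCE search suffixes appended, and only the final bare-name queries return NOERROR. A quick in-cluster check, assuming the busybox pod from this run is still present (illustrative):
	  # Resolve the registry service from inside the cluster.
	  kubectl exec busybox -- nslookup registry.kube-system.svc.cluster.local
	  # Show the search suffixes that produce the expanded queries seen in the log.
	  kubectl exec busybox -- cat /etc/resolv.conf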
	
	
	==> describe nodes <==
	Name:               addons-893295
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-893295
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c0dec3f81fa1d67ed4ee425e0873d0aa009f9b92
	                    minikube.k8s.io/name=addons-893295
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_02T19_55_20_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-893295
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-893295"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 02 Dec 2025 19:55:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-893295
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 02 Dec 2025 19:57:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 02 Dec 2025 19:56:50 +0000   Tue, 02 Dec 2025 19:55:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 02 Dec 2025 19:56:50 +0000   Tue, 02 Dec 2025 19:55:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 02 Dec 2025 19:56:50 +0000   Tue, 02 Dec 2025 19:55:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 02 Dec 2025 19:56:50 +0000   Tue, 02 Dec 2025 19:56:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-893295
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 c31a325af81b969158c21fa769271857
	  System UUID:                95635df6-c4bf-4028-a5ca-f3eeb7819f23
	  Boot ID:                    9dd5456f-e394-4b4b-9458-48d26faf507e
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (27 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     cloud-spanner-emulator-5bdddb765-jf7fb      0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	  gadget                      gadget-ps8xn                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	  gcp-auth                    gcp-auth-78565c9fb4-2jfqm                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  ingress-nginx               ingress-nginx-controller-6c8bf45fb-sjqdl    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         101s
	  kube-system                 amd-gpu-device-plugin-nklpz                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 coredns-66bc5c9577-9mvmk                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     103s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 csi-hostpathplugin-6h8dt                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 etcd-addons-893295                          100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         108s
	  kube-system                 kindnet-bphsd                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      104s
	  kube-system                 kube-apiserver-addons-893295                250m (3%)     0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-controller-manager-addons-893295       200m (2%)     0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	  kube-system                 kube-proxy-2bxgd                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 kube-scheduler-addons-893295                100m (1%)     0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 metrics-server-85b7d694d7-fbhzv             100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         102s
	  kube-system                 nvidia-device-plugin-daemonset-bkjsl        0 (0%)        0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 registry-6b586f9694-86wz6                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	  kube-system                 registry-creds-764b6fb674-qwrlk             0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	  kube-system                 registry-proxy-stnrw                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 snapshot-controller-7d9fbc56b8-57ls2        0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 snapshot-controller-7d9fbc56b8-kwz4l        0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	  local-path-storage          local-path-provisioner-648f6765c9-pjsp5     0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-vvw4f              0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     101s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 100s  kube-proxy       
	  Normal  Starting                 109s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  108s  kubelet          Node addons-893295 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    108s  kubelet          Node addons-893295 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     108s  kubelet          Node addons-893295 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           104s  node-controller  Node addons-893295 event: Registered Node addons-893295 in Controller
	  Normal  NodeReady                61s   kubelet          Node addons-893295 status is now: NodeReady
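	The node summary above corresponds to kubectl's node description and can be regenerated against a live copy of this profile, or narrowed to specific fields (illustrative):
	  # Reproduce the node view above.
	  kubectl describe node addons-893295
	  # Pull just the condition types via JSONPath.
	  kubectl get node addons-893295 -o jsonpath='{.status.conditions[*].type}'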
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 2e 25 8b 13 76 b0 08 06
	[  +0.000463] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff aa 78 af da 97 ad 08 06
	[ +21.495825] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 72 3b 00 e5 db 1b 08 06
	[  +0.039777] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c6 9b 3f d0 0c 1e 08 06
	[ +13.910569] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 2e 21 63 6f 6b 91 08 06
	[  +0.105653] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 06 95 9a 02 fc fb 08 06
	[  +3.562966] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000026] ll header: 00000000: ff ff ff ff ff ff 8e 1b 00 58 43 71 08 06
	[  +0.000350] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff c6 9b 3f d0 0c 1e 08 06
	[Dec 2 19:51] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 22 a1 97 79 8c ee 08 06
	[ +18.287827] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 66 26 29 de c0 df 08 06
	[  +0.000318] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 22 a1 97 79 8c ee 08 06
	[ +11.254611] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 2e f4 c0 f2 56 fb 08 06
	[  +0.000355] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 06 95 9a 02 fc fb 08 06
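	The "martian source" lines above are the kernel's log_martians reporting packets whose source address is unexpected on the receiving interface (here pod-CIDR 10.244.0.x addresses seen on eth0); they are informational in this run. To pull only these entries from the node again (illustrative):
	  # Filter the node's kernel ring buffer for martian-source reports.
	  minikube -p addons-893295 ssh -- sudo dmesg | grep -i martian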
	
	
	==> etcd [87e76e15e8595d052d66a4d86ee8b1416a8f60a669646ecbdd55cf8343b8db42] <==
	{"level":"warn","ts":"2025-12-02T19:55:15.871704Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38138","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T19:55:15.878822Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T19:55:15.886403Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38194","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T19:55:15.904357Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T19:55:15.912703Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38230","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T19:55:15.920479Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38260","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T19:55:15.971242Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T19:55:27.395829Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T19:55:53.376273Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60374","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T19:55:53.383344Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60390","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T19:55:53.402502Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T19:55:53.409631Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60420","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-02T19:56:18.329346Z","caller":"traceutil/trace.go:172","msg":"trace[1739899132] transaction","detail":"{read_only:false; response_revision:1003; number_of_response:1; }","duration":"100.270745ms","start":"2025-12-02T19:56:18.229050Z","end":"2025-12-02T19:56:18.329321Z","steps":["trace[1739899132] 'process raft request'  (duration: 100.174567ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-02T19:56:18.458264Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"119.84542ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-02T19:56:18.458371Z","caller":"traceutil/trace.go:172","msg":"trace[212098468] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1003; }","duration":"119.983555ms","start":"2025-12-02T19:56:18.338371Z","end":"2025-12-02T19:56:18.458355Z","steps":["trace[212098468] 'agreement among raft nodes before linearized reading'  (duration: 48.051953ms)","trace[212098468] 'range keys from in-memory index tree'  (duration: 71.748553ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-02T19:56:18.458447Z","caller":"traceutil/trace.go:172","msg":"trace[2027857323] transaction","detail":"{read_only:false; response_revision:1004; number_of_response:1; }","duration":"124.949865ms","start":"2025-12-02T19:56:18.333472Z","end":"2025-12-02T19:56:18.458421Z","steps":["trace[2027857323] 'process raft request'  (duration: 52.984946ms)","trace[2027857323] 'compare'  (duration: 71.783925ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-02T19:56:18.458478Z","caller":"traceutil/trace.go:172","msg":"trace[177507218] transaction","detail":"{read_only:false; response_revision:1006; number_of_response:1; }","duration":"124.585103ms","start":"2025-12-02T19:56:18.333882Z","end":"2025-12-02T19:56:18.458467Z","steps":["trace[177507218] 'process raft request'  (duration: 124.54011ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-02T19:56:18.458497Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"120.095405ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-02T19:56:18.458537Z","caller":"traceutil/trace.go:172","msg":"trace[1486270404] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1006; }","duration":"120.140912ms","start":"2025-12-02T19:56:18.338385Z","end":"2025-12-02T19:56:18.458526Z","steps":["trace[1486270404] 'agreement among raft nodes before linearized reading'  (duration: 120.054031ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-02T19:56:18.458509Z","caller":"traceutil/trace.go:172","msg":"trace[1377585880] transaction","detail":"{read_only:false; response_revision:1005; number_of_response:1; }","duration":"125.023531ms","start":"2025-12-02T19:56:18.333472Z","end":"2025-12-02T19:56:18.458495Z","steps":["trace[1377585880] 'process raft request'  (duration: 124.893619ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-02T19:56:18.497636Z","caller":"traceutil/trace.go:172","msg":"trace[2023940001] transaction","detail":"{read_only:false; response_revision:1007; number_of_response:1; }","duration":"107.519025ms","start":"2025-12-02T19:56:18.390102Z","end":"2025-12-02T19:56:18.497621Z","steps":["trace[2023940001] 'process raft request'  (duration: 107.424441ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-02T19:56:27.505602Z","caller":"traceutil/trace.go:172","msg":"trace[613967848] transaction","detail":"{read_only:false; response_revision:1051; number_of_response:1; }","duration":"147.690619ms","start":"2025-12-02T19:56:27.357890Z","end":"2025-12-02T19:56:27.505580Z","steps":["trace[613967848] 'process raft request'  (duration: 76.657046ms)","trace[613967848] 'compare'  (duration: 70.918231ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-02T19:56:43.019714Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"112.499896ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/daemonsets\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-02T19:56:43.019795Z","caller":"traceutil/trace.go:172","msg":"trace[459523527] range","detail":"{range_begin:/registry/daemonsets; range_end:; response_count:0; response_revision:1148; }","duration":"112.601042ms","start":"2025-12-02T19:56:42.907179Z","end":"2025-12-02T19:56:43.019780Z","steps":["trace[459523527] 'range keys from in-memory index tree'  (duration: 112.427491ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-02T19:56:51.201122Z","caller":"traceutil/trace.go:172","msg":"trace[1378131373] transaction","detail":"{read_only:false; response_revision:1208; number_of_response:1; }","duration":"120.264416ms","start":"2025-12-02T19:56:51.080832Z","end":"2025-12-02T19:56:51.201096Z","steps":["trace[1378131373] 'process raft request'  (duration: 39.579953ms)","trace[1378131373] 'compare'  (duration: 80.384372ms)"],"step_count":2}
	
	
	==> gcp-auth [a46f3e2f5f2db824560ef63faaba0a67cdf308ef0ead014b67f26c5a5f5b3d67] <==
	2025/12/02 19:56:49 GCP Auth Webhook started!
	2025/12/02 19:56:55 Ready to marshal response ...
	2025/12/02 19:56:55 Ready to write response ...
	2025/12/02 19:56:56 Ready to marshal response ...
	2025/12/02 19:56:56 Ready to write response ...
	2025/12/02 19:56:56 Ready to marshal response ...
	2025/12/02 19:56:56 Ready to write response ...
	
	
	==> kernel <==
	 19:57:07 up  1:39,  0 user,  load average: 0.98, 1.84, 2.04
	Linux addons-893295 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [36e4834af56306fb62768ad3dbe8f24dbfa293561c0c41bee1c6d418ce06f454] <==
	I1202 19:55:25.816557       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1202 19:55:25.816586       1 controller.go:381] "Waiting for informer caches to sync"
	I1202 19:55:25.816601       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1202 19:55:25.816737       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1202 19:55:55.817378       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1202 19:55:55.817378       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1202 19:55:55.817408       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1202 19:55:55.818620       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1202 19:55:57.218199       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1202 19:55:57.218243       1 metrics.go:72] Registering metrics
	I1202 19:55:57.218361       1 controller.go:711] "Syncing nftables rules"
	I1202 19:56:05.823699       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 19:56:05.823767       1 main.go:301] handling current node
	I1202 19:56:15.816354       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 19:56:15.816407       1 main.go:301] handling current node
	I1202 19:56:25.816333       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 19:56:25.816507       1 main.go:301] handling current node
	I1202 19:56:35.816831       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 19:56:35.816868       1 main.go:301] handling current node
	I1202 19:56:45.816400       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 19:56:45.816441       1 main.go:301] handling current node
	I1202 19:56:55.816402       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 19:56:55.816438       1 main.go:301] handling current node
	I1202 19:57:05.821202       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 19:57:05.821248       1 main.go:301] handling current node
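	The initial "dial tcp 10.96.0.1:443: i/o timeout" list failures above clear once the apiserver service becomes reachable; the informer caches sync at 19:55:57 and the node-handling loop then ticks every ten seconds. A hedged way to confirm the kindnet daemonset is healthy afterwards (the app=kindnet label is an assumption based on minikube's usual kindnet manifest):
	  # Check the kindnet pod backing this node.
	  kubectl -n kube-system get pods -l app=kindnet -o wide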
	
	
	==> kube-apiserver [64bbafcaa8986f6e93390db3e1aa160fe3cecdd54cdd91e940adc5db87fefb45] <==
	I1202 19:55:32.993117       1 alloc.go:328] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.111.102.153"}
	W1202 19:55:53.376176       1 logging.go:55] [core] [Channel #270 SubChannel #271]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1202 19:55:53.383284       1 logging.go:55] [core] [Channel #274 SubChannel #275]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1202 19:55:53.402410       1 logging.go:55] [core] [Channel #278 SubChannel #279]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1202 19:55:53.409525       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1202 19:56:06.124380       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.111.102.153:443: connect: connection refused
	E1202 19:56:06.125962       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.111.102.153:443: connect: connection refused" logger="UnhandledError"
	W1202 19:56:06.125291       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.111.102.153:443: connect: connection refused
	E1202 19:56:06.126055       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.111.102.153:443: connect: connection refused" logger="UnhandledError"
	W1202 19:56:06.150918       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.111.102.153:443: connect: connection refused
	E1202 19:56:06.150959       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.111.102.153:443: connect: connection refused" logger="UnhandledError"
	W1202 19:56:06.151059       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.111.102.153:443: connect: connection refused
	E1202 19:56:06.151121       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.111.102.153:443: connect: connection refused" logger="UnhandledError"
	E1202 19:56:09.220298       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.25.184:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.106.25.184:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.106.25.184:443: connect: connection refused" logger="UnhandledError"
	W1202 19:56:09.220311       1 handler_proxy.go:99] no RequestInfo found in the context
	E1202 19:56:09.220460       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1202 19:56:09.221088       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.25.184:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.106.25.184:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.106.25.184:443: connect: connection refused" logger="UnhandledError"
	E1202 19:56:09.226487       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.25.184:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.106.25.184:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.106.25.184:443: connect: connection refused" logger="UnhandledError"
	E1202 19:56:09.247916       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.25.184:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.106.25.184:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.106.25.184:443: connect: connection refused" logger="UnhandledError"
	I1202 19:56:09.341377       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1202 19:57:05.188156       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:51502: use of closed network connection
	E1202 19:57:05.342315       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:51540: use of closed network connection
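	The "failing open" messages above show the gcp-auth mutating webhook being called before its backing service was ready; because the webhook fails open, admission proceeded anyway, and the errors stop once gcp-auth reports started at 19:56:49 in the section above. One way to confirm the failure policy on a live cluster, without assuming the configuration object's name (illustrative):
	  # Show failurePolicy settings across mutating webhook configurations.
	  kubectl get mutatingwebhookconfigurations -o yaml | grep -B2 -A2 failurePolicy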
	
	
	==> kube-controller-manager [1053c12fee90a817e22976e0dc30541fb27c049e02c7c5af353833a13b30e982] <==
	I1202 19:55:23.360550       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1202 19:55:23.361805       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1202 19:55:23.363897       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1202 19:55:23.363934       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1202 19:55:23.363977       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1202 19:55:23.364006       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1202 19:55:23.364016       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1202 19:55:23.364023       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1202 19:55:23.364149       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1202 19:55:23.365395       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1202 19:55:23.370997       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="addons-893295" podCIDRs=["10.244.0.0/24"]
	I1202 19:55:23.374126       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1202 19:55:23.384971       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1202 19:55:23.389179       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1202 19:55:23.389206       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1202 19:55:23.389215       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	E1202 19:55:25.987978       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1202 19:55:53.369748       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1202 19:55:53.369951       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1202 19:55:53.370005       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1202 19:55:53.392552       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1202 19:55:53.396673       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1202 19:55:53.471085       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1202 19:55:53.497428       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1202 19:56:08.366207       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [92d33e649bb3a4d1fbfddebd2c29786df9d0f9bbe8c4df37c931d3fb4cae82a7] <==
	I1202 19:55:25.649054       1 server_linux.go:53] "Using iptables proxy"
	I1202 19:55:25.882566       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1202 19:55:25.988575       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1202 19:55:25.988673       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1202 19:55:25.988802       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1202 19:55:26.120262       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1202 19:55:26.120418       1 server_linux.go:132] "Using iptables Proxier"
	I1202 19:55:26.128948       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1202 19:55:26.129924       1 server.go:527] "Version info" version="v1.34.2"
	I1202 19:55:26.130196       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 19:55:26.160522       1 config.go:200] "Starting service config controller"
	I1202 19:55:26.160612       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1202 19:55:26.160756       1 config.go:106] "Starting endpoint slice config controller"
	I1202 19:55:26.160918       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1202 19:55:26.161001       1 config.go:403] "Starting serviceCIDR config controller"
	I1202 19:55:26.161028       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1202 19:55:26.162132       1 config.go:309] "Starting node config controller"
	I1202 19:55:26.162200       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1202 19:55:26.260801       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1202 19:55:26.261946       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1202 19:55:26.261967       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1202 19:55:26.263481       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [54de7a8ca3420358423254cbf3d9a5a5e7140b7f46e22139375e40856000099c] <==
	E1202 19:55:16.388944       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1202 19:55:16.389121       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1202 19:55:16.389105       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1202 19:55:16.389311       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1202 19:55:16.389361       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1202 19:55:16.389377       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1202 19:55:16.389499       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1202 19:55:16.389503       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1202 19:55:16.389579       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1202 19:55:16.389620       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1202 19:55:16.389632       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1202 19:55:16.389667       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1202 19:55:16.389673       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1202 19:55:17.267586       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1202 19:55:17.268580       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1202 19:55:17.282331       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1202 19:55:17.293220       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1202 19:55:17.321592       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1202 19:55:17.351793       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1202 19:55:17.360020       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1202 19:55:17.444948       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1202 19:55:17.487657       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1202 19:55:17.501893       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1202 19:55:17.541246       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	I1202 19:55:20.386177       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
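	The RBAC "forbidden" list failures above are the usual scheduler bootstrap race before its role bindings are in place; the closing "Caches are synced" line at 19:55:20 shows it recovered. To re-check from the live pod named in this report (illustrative):
	  # Tail the scheduler's recent log to confirm no further watch failures.
	  kubectl -n kube-system logs kube-scheduler-addons-893295 --tail=50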
	
	
	==> kubelet <==
	Dec 02 19:56:33 addons-893295 kubelet[1277]: I1202 19:56:33.304304    1277 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-bkjsl" secret="" err="secret \"gcp-auth\" not found"
	Dec 02 19:56:35 addons-893295 kubelet[1277]: I1202 19:56:35.312929    1277 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-nklpz" secret="" err="secret \"gcp-auth\" not found"
	Dec 02 19:56:35 addons-893295 kubelet[1277]: I1202 19:56:35.323434    1277 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/amd-gpu-device-plugin-nklpz" podStartSLOduration=1.555293975 podStartE2EDuration="29.323412473s" podCreationTimestamp="2025-12-02 19:56:06 +0000 UTC" firstStartedPulling="2025-12-02 19:56:06.580371606 +0000 UTC m=+47.656320582" lastFinishedPulling="2025-12-02 19:56:34.348490096 +0000 UTC m=+75.424439080" observedRunningTime="2025-12-02 19:56:35.32338179 +0000 UTC m=+76.399330781" watchObservedRunningTime="2025-12-02 19:56:35.323412473 +0000 UTC m=+76.399361462"
	Dec 02 19:56:35 addons-893295 kubelet[1277]: I1202 19:56:35.323622    1277 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/nvidia-device-plugin-daemonset-bkjsl" podStartSLOduration=4.039480339 podStartE2EDuration="29.323611435s" podCreationTimestamp="2025-12-02 19:56:06 +0000 UTC" firstStartedPulling="2025-12-02 19:56:06.569861394 +0000 UTC m=+47.645810363" lastFinishedPulling="2025-12-02 19:56:31.853992488 +0000 UTC m=+72.929941459" observedRunningTime="2025-12-02 19:56:32.310664633 +0000 UTC m=+73.386613624" watchObservedRunningTime="2025-12-02 19:56:35.323611435 +0000 UTC m=+76.399560423"
	Dec 02 19:56:36 addons-893295 kubelet[1277]: I1202 19:56:36.316348    1277 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-nklpz" secret="" err="secret \"gcp-auth\" not found"
	Dec 02 19:56:37 addons-893295 kubelet[1277]: I1202 19:56:37.321192    1277 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-stnrw" secret="" err="secret \"gcp-auth\" not found"
	Dec 02 19:56:38 addons-893295 kubelet[1277]: E1202 19:56:38.074802    1277 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Dec 02 19:56:38 addons-893295 kubelet[1277]: E1202 19:56:38.074891    1277 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/242299e3-e588-4f0a-890d-da4c53cafcce-gcr-creds podName:242299e3-e588-4f0a-890d-da4c53cafcce nodeName:}" failed. No retries permitted until 2025-12-02 19:57:10.074875524 +0000 UTC m=+111.150824504 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/242299e3-e588-4f0a-890d-da4c53cafcce-gcr-creds") pod "registry-creds-764b6fb674-qwrlk" (UID: "242299e3-e588-4f0a-890d-da4c53cafcce") : secret "registry-creds-gcr" not found
	Dec 02 19:56:38 addons-893295 kubelet[1277]: I1202 19:56:38.328558    1277 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-stnrw" secret="" err="secret \"gcp-auth\" not found"
	Dec 02 19:56:38 addons-893295 kubelet[1277]: I1202 19:56:38.339788    1277 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/registry-proxy-stnrw" podStartSLOduration=1.743637375 podStartE2EDuration="32.339761796s" podCreationTimestamp="2025-12-02 19:56:06 +0000 UTC" firstStartedPulling="2025-12-02 19:56:06.651821161 +0000 UTC m=+47.727770133" lastFinishedPulling="2025-12-02 19:56:37.247945543 +0000 UTC m=+78.323894554" observedRunningTime="2025-12-02 19:56:37.33210798 +0000 UTC m=+78.408056969" watchObservedRunningTime="2025-12-02 19:56:38.339761796 +0000 UTC m=+79.415710785"
	Dec 02 19:56:38 addons-893295 kubelet[1277]: I1202 19:56:38.340096    1277 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/csi-hostpath-resizer-0" podStartSLOduration=41.087986352 podStartE2EDuration="1m12.340062571s" podCreationTimestamp="2025-12-02 19:55:26 +0000 UTC" firstStartedPulling="2025-12-02 19:56:06.844106524 +0000 UTC m=+47.920055499" lastFinishedPulling="2025-12-02 19:56:38.096182729 +0000 UTC m=+79.172131718" observedRunningTime="2025-12-02 19:56:38.338976753 +0000 UTC m=+79.414925742" watchObservedRunningTime="2025-12-02 19:56:38.340062571 +0000 UTC m=+79.416011559"
	Dec 02 19:56:39 addons-893295 kubelet[1277]: I1202 19:56:39.345059    1277 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/snapshot-controller-7d9fbc56b8-kwz4l" podStartSLOduration=41.423062881 podStartE2EDuration="1m13.345035372s" podCreationTimestamp="2025-12-02 19:55:26 +0000 UTC" firstStartedPulling="2025-12-02 19:56:07.053575542 +0000 UTC m=+48.129524527" lastFinishedPulling="2025-12-02 19:56:38.975548037 +0000 UTC m=+80.051497018" observedRunningTime="2025-12-02 19:56:39.344346891 +0000 UTC m=+80.420295879" watchObservedRunningTime="2025-12-02 19:56:39.345035372 +0000 UTC m=+80.420984361"
	Dec 02 19:56:39 addons-893295 kubelet[1277]: I1202 19:56:39.353279    1277 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/snapshot-controller-7d9fbc56b8-57ls2" podStartSLOduration=41.201885637 podStartE2EDuration="1m13.353254784s" podCreationTimestamp="2025-12-02 19:55:26 +0000 UTC" firstStartedPulling="2025-12-02 19:56:07.054336309 +0000 UTC m=+48.130285277" lastFinishedPulling="2025-12-02 19:56:39.205705453 +0000 UTC m=+80.281654424" observedRunningTime="2025-12-02 19:56:39.352914938 +0000 UTC m=+80.428863927" watchObservedRunningTime="2025-12-02 19:56:39.353254784 +0000 UTC m=+80.429203775"
	Dec 02 19:56:43 addons-893295 kubelet[1277]: I1202 19:56:43.367713    1277 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="ingress-nginx/ingress-nginx-controller-6c8bf45fb-sjqdl" podStartSLOduration=56.398921839 podStartE2EDuration="1m17.367689391s" podCreationTimestamp="2025-12-02 19:55:26 +0000 UTC" firstStartedPulling="2025-12-02 19:56:22.112878593 +0000 UTC m=+63.188827561" lastFinishedPulling="2025-12-02 19:56:43.081646124 +0000 UTC m=+84.157595113" observedRunningTime="2025-12-02 19:56:43.366270521 +0000 UTC m=+84.442219511" watchObservedRunningTime="2025-12-02 19:56:43.367689391 +0000 UTC m=+84.443638381"
	Dec 02 19:56:46 addons-893295 kubelet[1277]: I1202 19:56:46.389015    1277 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gadget/gadget-ps8xn" podStartSLOduration=65.234032927 podStartE2EDuration="1m20.388988338s" podCreationTimestamp="2025-12-02 19:55:26 +0000 UTC" firstStartedPulling="2025-12-02 19:56:30.780730659 +0000 UTC m=+71.856679632" lastFinishedPulling="2025-12-02 19:56:45.935686063 +0000 UTC m=+87.011635043" observedRunningTime="2025-12-02 19:56:46.388676658 +0000 UTC m=+87.464625647" watchObservedRunningTime="2025-12-02 19:56:46.388988338 +0000 UTC m=+87.464937327"
	Dec 02 19:56:49 addons-893295 kubelet[1277]: I1202 19:56:49.395655    1277 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gcp-auth/gcp-auth-78565c9fb4-2jfqm" podStartSLOduration=65.551118644 podStartE2EDuration="1m16.395629484s" podCreationTimestamp="2025-12-02 19:55:33 +0000 UTC" firstStartedPulling="2025-12-02 19:56:38.31271193 +0000 UTC m=+79.388660900" lastFinishedPulling="2025-12-02 19:56:49.157222769 +0000 UTC m=+90.233171740" observedRunningTime="2025-12-02 19:56:49.394655462 +0000 UTC m=+90.470604452" watchObservedRunningTime="2025-12-02 19:56:49.395629484 +0000 UTC m=+90.471578475"
	Dec 02 19:56:51 addons-893295 kubelet[1277]: I1202 19:56:51.067823    1277 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: hostpath.csi.k8s.io endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
	Dec 02 19:56:51 addons-893295 kubelet[1277]: I1202 19:56:51.067880    1277 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: hostpath.csi.k8s.io at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
	Dec 02 19:56:53 addons-893295 kubelet[1277]: I1202 19:56:53.430303    1277 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/csi-hostpathplugin-6h8dt" podStartSLOduration=1.27048099 podStartE2EDuration="47.430278535s" podCreationTimestamp="2025-12-02 19:56:06 +0000 UTC" firstStartedPulling="2025-12-02 19:56:06.578719252 +0000 UTC m=+47.654668226" lastFinishedPulling="2025-12-02 19:56:52.738516802 +0000 UTC m=+93.814465771" observedRunningTime="2025-12-02 19:56:53.429999807 +0000 UTC m=+94.505948806" watchObservedRunningTime="2025-12-02 19:56:53.430278535 +0000 UTC m=+94.506227525"
	Dec 02 19:56:56 addons-893295 kubelet[1277]: I1202 19:56:56.221887    1277 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/76f9d798-f2b6-4d6f-9f6c-3fba90dc0c01-gcp-creds\") pod \"busybox\" (UID: \"76f9d798-f2b6-4d6f-9f6c-3fba90dc0c01\") " pod="default/busybox"
	Dec 02 19:56:56 addons-893295 kubelet[1277]: I1202 19:56:56.221951    1277 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42nmp\" (UniqueName: \"kubernetes.io/projected/76f9d798-f2b6-4d6f-9f6c-3fba90dc0c01-kube-api-access-42nmp\") pod \"busybox\" (UID: \"76f9d798-f2b6-4d6f-9f6c-3fba90dc0c01\") " pod="default/busybox"
	Dec 02 19:56:58 addons-893295 kubelet[1277]: I1202 19:56:58.452088    1277 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=0.547563468 podStartE2EDuration="2.452049099s" podCreationTimestamp="2025-12-02 19:56:56 +0000 UTC" firstStartedPulling="2025-12-02 19:56:56.364300382 +0000 UTC m=+97.440249350" lastFinishedPulling="2025-12-02 19:56:58.268786013 +0000 UTC m=+99.344734981" observedRunningTime="2025-12-02 19:56:58.451638319 +0000 UTC m=+99.527587308" watchObservedRunningTime="2025-12-02 19:56:58.452049099 +0000 UTC m=+99.527998088"
	Dec 02 19:56:59 addons-893295 kubelet[1277]: I1202 19:56:59.017040    1277 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eba037a1-b926-4b80-bdf4-d6f027cf72a7" path="/var/lib/kubelet/pods/eba037a1-b926-4b80-bdf4-d6f027cf72a7/volumes"
	Dec 02 19:57:01 addons-893295 kubelet[1277]: I1202 19:57:01.016372    1277 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db9d66ab-663a-4da4-acc8-f1126a912866" path="/var/lib/kubelet/pods/db9d66ab-663a-4da4-acc8-f1126a912866/volumes"
	Dec 02 19:57:05 addons-893295 kubelet[1277]: E1202 19:57:05.342251    1277 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:53194->127.0.0.1:39979: write tcp 127.0.0.1:53194->127.0.0.1:39979: write: broken pipe
	
	
	==> storage-provisioner [548b1d008679ffab8ee06c2f360e832860d3a7904cf593b25d31675b7bb892f9] <==
	W1202 19:56:42.842679       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 19:56:44.846143       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 19:56:44.852315       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 19:56:46.855756       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 19:56:46.859878       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 19:56:48.864155       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 19:56:48.870181       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 19:56:50.873377       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 19:56:50.960060       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 19:56:52.963493       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 19:56:52.967658       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 19:56:54.970945       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 19:56:54.976744       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 19:56:56.981534       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 19:56:56.985712       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 19:56:58.988919       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 19:56:58.993574       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 19:57:00.996788       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 19:57:01.000685       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 19:57:03.004471       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 19:57:03.008491       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 19:57:05.011793       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 19:57:05.016692       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 19:57:07.020980       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 19:57:07.024614       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-893295 -n addons-893295
helpers_test.go:269: (dbg) Run:  kubectl --context addons-893295 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-bbp4j ingress-nginx-admission-patch-szllz registry-creds-764b6fb674-qwrlk
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-893295 describe pod ingress-nginx-admission-create-bbp4j ingress-nginx-admission-patch-szllz registry-creds-764b6fb674-qwrlk
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-893295 describe pod ingress-nginx-admission-create-bbp4j ingress-nginx-admission-patch-szllz registry-creds-764b6fb674-qwrlk: exit status 1 (62.162436ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-bbp4j" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-szllz" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-qwrlk" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-893295 describe pod ingress-nginx-admission-create-bbp4j ingress-nginx-admission-patch-szllz registry-creds-764b6fb674-qwrlk: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-893295 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-893295 addons disable headlamp --alsologtostderr -v=1: exit status 11 (261.004874ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 19:57:08.083237  421767 out.go:360] Setting OutFile to fd 1 ...
	I1202 19:57:08.083508  421767 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 19:57:08.083519  421767 out.go:374] Setting ErrFile to fd 2...
	I1202 19:57:08.083526  421767 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 19:57:08.083730  421767 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-407427/.minikube/bin
	I1202 19:57:08.084042  421767 mustload.go:66] Loading cluster: addons-893295
	I1202 19:57:08.084410  421767 config.go:182] Loaded profile config "addons-893295": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:57:08.084438  421767 addons.go:622] checking whether the cluster is paused
	I1202 19:57:08.084535  421767 config.go:182] Loaded profile config "addons-893295": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:57:08.084560  421767 host.go:66] Checking if "addons-893295" exists ...
	I1202 19:57:08.084974  421767 cli_runner.go:164] Run: docker container inspect addons-893295 --format={{.State.Status}}
	I1202 19:57:08.104188  421767 ssh_runner.go:195] Run: systemctl --version
	I1202 19:57:08.104291  421767 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-893295
	I1202 19:57:08.124908  421767 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/addons-893295/id_rsa Username:docker}
	I1202 19:57:08.225496  421767 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 19:57:08.225590  421767 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 19:57:08.255903  421767 cri.go:89] found id: "72a3a94a8615446f6a8a6edf8cab89a31462a9125890a07caa7b5c08f54ee5d4"
	I1202 19:57:08.255927  421767 cri.go:89] found id: "3fc3b9c2bb5465a31c0448a05bdfa005e3690110089411631dc7f034b6d8ba5f"
	I1202 19:57:08.255931  421767 cri.go:89] found id: "23592b1014e085ea0e5ab3db08387563e82cae3f3801aefb1a36803352f4b32c"
	I1202 19:57:08.255935  421767 cri.go:89] found id: "4873f6a4745b98e6565829135d48f208fc3b8c8fc38349268058cfe66db69ace"
	I1202 19:57:08.255938  421767 cri.go:89] found id: "69202c0144e36fa98f89f3e4dcc0bb6766cd1a5e7765438a217890a210ccc213"
	I1202 19:57:08.255942  421767 cri.go:89] found id: "e272a50ae70cef4e55de4fc5c4b0afb42c240aef2f0e61c0f58d21f32bb4b1b8"
	I1202 19:57:08.255945  421767 cri.go:89] found id: "343bfc0b495bea2a196f645318c6f732f4aac4d10f89f12fe35398625eac34a6"
	I1202 19:57:08.255948  421767 cri.go:89] found id: "c935f2bdad559803c1b224bb424e2d6a8e3f939cc705debca52e51d3b73805cb"
	I1202 19:57:08.255951  421767 cri.go:89] found id: "2021a9af4b97cf9f19cd51daff4057de8ce4a98c1392ab4618729a6e1fdbe890"
	I1202 19:57:08.255957  421767 cri.go:89] found id: "7d3c2329b0b0c2e623e8d3059a441a596800bfcc5ff55d233343c158bb68d997"
	I1202 19:57:08.255960  421767 cri.go:89] found id: "33d9c5ffbca0f707ad94361bf00ebbc97925e1784dd973ef7bd8245741da9b67"
	I1202 19:57:08.255963  421767 cri.go:89] found id: "c59167a3c785bc464e3e63318df704b0084b4a2a24721b883033175b6f4b533f"
	I1202 19:57:08.255966  421767 cri.go:89] found id: "1d0670321bc4abe2d7954d0d6f908cf4e3863170f2e522b0100392c768577198"
	I1202 19:57:08.255969  421767 cri.go:89] found id: "9012f9d6215d108610b3c6096d8b9fd68c47c3b0a9ba15cab4f13cc9e385d4b9"
	I1202 19:57:08.255972  421767 cri.go:89] found id: "457ec4512e89c116a7c5ba880e93b4b91cf5fc694ff53ccf03533d6e1e36de9b"
	I1202 19:57:08.255977  421767 cri.go:89] found id: "91253d86ed19be0b0e1a31e49336ee85f71ca41d7f491fcc1fd6cd2978993ba0"
	I1202 19:57:08.255980  421767 cri.go:89] found id: "1a4586dbac8e8d1828435d72cdf3947bd1869e463e0102cc7b6664ebbeddeacf"
	I1202 19:57:08.255986  421767 cri.go:89] found id: "548b1d008679ffab8ee06c2f360e832860d3a7904cf593b25d31675b7bb892f9"
	I1202 19:57:08.255991  421767 cri.go:89] found id: "92d33e649bb3a4d1fbfddebd2c29786df9d0f9bbe8c4df37c931d3fb4cae82a7"
	I1202 19:57:08.255995  421767 cri.go:89] found id: "36e4834af56306fb62768ad3dbe8f24dbfa293561c0c41bee1c6d418ce06f454"
	I1202 19:57:08.255998  421767 cri.go:89] found id: "1053c12fee90a817e22976e0dc30541fb27c049e02c7c5af353833a13b30e982"
	I1202 19:57:08.256000  421767 cri.go:89] found id: "87e76e15e8595d052d66a4d86ee8b1416a8f60a669646ecbdd55cf8343b8db42"
	I1202 19:57:08.256003  421767 cri.go:89] found id: "54de7a8ca3420358423254cbf3d9a5a5e7140b7f46e22139375e40856000099c"
	I1202 19:57:08.256006  421767 cri.go:89] found id: "64bbafcaa8986f6e93390db3e1aa160fe3cecdd54cdd91e940adc5db87fefb45"
	I1202 19:57:08.256009  421767 cri.go:89] found id: ""
	I1202 19:57:08.256049  421767 ssh_runner.go:195] Run: sudo runc list -f json
	I1202 19:57:08.272032  421767 out.go:203] 
	W1202 19:57:08.273632  421767 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T19:57:08Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T19:57:08Z" level=error msg="open /run/runc: no such file or directory"
	
	W1202 19:57:08.273656  421767 out.go:285] * 
	* 
	W1202 19:57:08.277677  421767 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 19:57:08.279336  421767 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable headlamp addon: args "out/minikube-linux-amd64 -p addons-893295 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (2.68s)
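Note: this failure and the other addon-disable failures below share the same stderr signature: before disabling an addon, minikube checks whether the cluster is paused by listing kube-system containers with crictl over SSH and then running `sudo runc list -f json`, and that runc call exits 1 with "open /run/runc: no such file or directory" on this crio node, which surfaces as MK_ADDON_DISABLE_PAUSED / exit status 11. A minimal sketch of reproducing that check by hand against the same profile (an assumption that addons-893295 is still running; the command strings are the ones shown in the stderr above):

  # list kube-system containers the same way the pause check does
  out/minikube-linux-amd64 -p addons-893295 ssh 'sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system'
  # the step that fails in these tests: runc appears to have no /run/runc state directory on this node
  out/minikube-linux-amd64 -p addons-893295 ssh 'sudo runc list -f json'

If the second command reproduces the "open /run/runc" error, these disable failures would point at the paused-state check rather than at the individual addons being exercised.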

                                                
                                    
TestAddons/parallel/CloudSpanner (5.27s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-5bdddb765-jf7fb" [98060d50-6b22-4509-a261-a178b8cd9bf1] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.005678357s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-893295 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-893295 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (257.972193ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 19:57:28.714016  424262 out.go:360] Setting OutFile to fd 1 ...
	I1202 19:57:28.714332  424262 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 19:57:28.714344  424262 out.go:374] Setting ErrFile to fd 2...
	I1202 19:57:28.714348  424262 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 19:57:28.714536  424262 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-407427/.minikube/bin
	I1202 19:57:28.714782  424262 mustload.go:66] Loading cluster: addons-893295
	I1202 19:57:28.715144  424262 config.go:182] Loaded profile config "addons-893295": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:57:28.715173  424262 addons.go:622] checking whether the cluster is paused
	I1202 19:57:28.715254  424262 config.go:182] Loaded profile config "addons-893295": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:57:28.715270  424262 host.go:66] Checking if "addons-893295" exists ...
	I1202 19:57:28.715697  424262 cli_runner.go:164] Run: docker container inspect addons-893295 --format={{.State.Status}}
	I1202 19:57:28.733832  424262 ssh_runner.go:195] Run: systemctl --version
	I1202 19:57:28.733886  424262 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-893295
	I1202 19:57:28.752094  424262 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/addons-893295/id_rsa Username:docker}
	I1202 19:57:28.852230  424262 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 19:57:28.852340  424262 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 19:57:28.882909  424262 cri.go:89] found id: "72a3a94a8615446f6a8a6edf8cab89a31462a9125890a07caa7b5c08f54ee5d4"
	I1202 19:57:28.882931  424262 cri.go:89] found id: "3fc3b9c2bb5465a31c0448a05bdfa005e3690110089411631dc7f034b6d8ba5f"
	I1202 19:57:28.882935  424262 cri.go:89] found id: "23592b1014e085ea0e5ab3db08387563e82cae3f3801aefb1a36803352f4b32c"
	I1202 19:57:28.882938  424262 cri.go:89] found id: "4873f6a4745b98e6565829135d48f208fc3b8c8fc38349268058cfe66db69ace"
	I1202 19:57:28.882941  424262 cri.go:89] found id: "69202c0144e36fa98f89f3e4dcc0bb6766cd1a5e7765438a217890a210ccc213"
	I1202 19:57:28.882946  424262 cri.go:89] found id: "e272a50ae70cef4e55de4fc5c4b0afb42c240aef2f0e61c0f58d21f32bb4b1b8"
	I1202 19:57:28.882949  424262 cri.go:89] found id: "343bfc0b495bea2a196f645318c6f732f4aac4d10f89f12fe35398625eac34a6"
	I1202 19:57:28.882952  424262 cri.go:89] found id: "c935f2bdad559803c1b224bb424e2d6a8e3f939cc705debca52e51d3b73805cb"
	I1202 19:57:28.882955  424262 cri.go:89] found id: "2021a9af4b97cf9f19cd51daff4057de8ce4a98c1392ab4618729a6e1fdbe890"
	I1202 19:57:28.882961  424262 cri.go:89] found id: "7d3c2329b0b0c2e623e8d3059a441a596800bfcc5ff55d233343c158bb68d997"
	I1202 19:57:28.882968  424262 cri.go:89] found id: "33d9c5ffbca0f707ad94361bf00ebbc97925e1784dd973ef7bd8245741da9b67"
	I1202 19:57:28.882971  424262 cri.go:89] found id: "c59167a3c785bc464e3e63318df704b0084b4a2a24721b883033175b6f4b533f"
	I1202 19:57:28.882974  424262 cri.go:89] found id: "1d0670321bc4abe2d7954d0d6f908cf4e3863170f2e522b0100392c768577198"
	I1202 19:57:28.882977  424262 cri.go:89] found id: "9012f9d6215d108610b3c6096d8b9fd68c47c3b0a9ba15cab4f13cc9e385d4b9"
	I1202 19:57:28.882979  424262 cri.go:89] found id: "457ec4512e89c116a7c5ba880e93b4b91cf5fc694ff53ccf03533d6e1e36de9b"
	I1202 19:57:28.882998  424262 cri.go:89] found id: "91253d86ed19be0b0e1a31e49336ee85f71ca41d7f491fcc1fd6cd2978993ba0"
	I1202 19:57:28.883003  424262 cri.go:89] found id: "1a4586dbac8e8d1828435d72cdf3947bd1869e463e0102cc7b6664ebbeddeacf"
	I1202 19:57:28.883008  424262 cri.go:89] found id: "548b1d008679ffab8ee06c2f360e832860d3a7904cf593b25d31675b7bb892f9"
	I1202 19:57:28.883010  424262 cri.go:89] found id: "92d33e649bb3a4d1fbfddebd2c29786df9d0f9bbe8c4df37c931d3fb4cae82a7"
	I1202 19:57:28.883013  424262 cri.go:89] found id: "36e4834af56306fb62768ad3dbe8f24dbfa293561c0c41bee1c6d418ce06f454"
	I1202 19:57:28.883016  424262 cri.go:89] found id: "1053c12fee90a817e22976e0dc30541fb27c049e02c7c5af353833a13b30e982"
	I1202 19:57:28.883019  424262 cri.go:89] found id: "87e76e15e8595d052d66a4d86ee8b1416a8f60a669646ecbdd55cf8343b8db42"
	I1202 19:57:28.883022  424262 cri.go:89] found id: "54de7a8ca3420358423254cbf3d9a5a5e7140b7f46e22139375e40856000099c"
	I1202 19:57:28.883024  424262 cri.go:89] found id: "64bbafcaa8986f6e93390db3e1aa160fe3cecdd54cdd91e940adc5db87fefb45"
	I1202 19:57:28.883030  424262 cri.go:89] found id: ""
	I1202 19:57:28.883090  424262 ssh_runner.go:195] Run: sudo runc list -f json
	I1202 19:57:28.897650  424262 out.go:203] 
	W1202 19:57:28.898967  424262 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T19:57:28Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T19:57:28Z" level=error msg="open /run/runc: no such file or directory"
	
	W1202 19:57:28.898993  424262 out.go:285] * 
	* 
	W1202 19:57:28.903044  424262 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 19:57:28.904541  424262 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable cloud-spanner addon: args "out/minikube-linux-amd64 -p addons-893295 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (5.27s)

                                                
                                    
TestAddons/parallel/LocalPath (12.27s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-893295 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-893295 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-893295 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-893295 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-893295 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-893295 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-893295 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-893295 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [5057d06b-5c72-4e55-9fb0-7367905b5e6b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [5057d06b-5c72-4e55-9fb0-7367905b5e6b] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [5057d06b-5c72-4e55-9fb0-7367905b5e6b] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.00358795s
addons_test.go:967: (dbg) Run:  kubectl --context addons-893295 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-893295 ssh "cat /opt/local-path-provisioner/pvc-686522b0-186d-48bc-b51e-e42cc4a9a58b_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-893295 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-893295 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-893295 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-893295 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (305.194965ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 19:57:22.923300  423409 out.go:360] Setting OutFile to fd 1 ...
	I1202 19:57:22.923543  423409 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 19:57:22.923551  423409 out.go:374] Setting ErrFile to fd 2...
	I1202 19:57:22.923555  423409 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 19:57:22.923734  423409 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-407427/.minikube/bin
	I1202 19:57:22.923997  423409 mustload.go:66] Loading cluster: addons-893295
	I1202 19:57:22.924387  423409 config.go:182] Loaded profile config "addons-893295": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:57:22.924418  423409 addons.go:622] checking whether the cluster is paused
	I1202 19:57:22.924502  423409 config.go:182] Loaded profile config "addons-893295": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:57:22.924518  423409 host.go:66] Checking if "addons-893295" exists ...
	I1202 19:57:22.925323  423409 cli_runner.go:164] Run: docker container inspect addons-893295 --format={{.State.Status}}
	I1202 19:57:22.944791  423409 ssh_runner.go:195] Run: systemctl --version
	I1202 19:57:22.944851  423409 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-893295
	I1202 19:57:22.968532  423409 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/addons-893295/id_rsa Username:docker}
	I1202 19:57:23.074793  423409 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 19:57:23.074887  423409 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 19:57:23.114626  423409 cri.go:89] found id: "72a3a94a8615446f6a8a6edf8cab89a31462a9125890a07caa7b5c08f54ee5d4"
	I1202 19:57:23.114660  423409 cri.go:89] found id: "3fc3b9c2bb5465a31c0448a05bdfa005e3690110089411631dc7f034b6d8ba5f"
	I1202 19:57:23.114667  423409 cri.go:89] found id: "23592b1014e085ea0e5ab3db08387563e82cae3f3801aefb1a36803352f4b32c"
	I1202 19:57:23.114672  423409 cri.go:89] found id: "4873f6a4745b98e6565829135d48f208fc3b8c8fc38349268058cfe66db69ace"
	I1202 19:57:23.114715  423409 cri.go:89] found id: "69202c0144e36fa98f89f3e4dcc0bb6766cd1a5e7765438a217890a210ccc213"
	I1202 19:57:23.114732  423409 cri.go:89] found id: "e272a50ae70cef4e55de4fc5c4b0afb42c240aef2f0e61c0f58d21f32bb4b1b8"
	I1202 19:57:23.114737  423409 cri.go:89] found id: "343bfc0b495bea2a196f645318c6f732f4aac4d10f89f12fe35398625eac34a6"
	I1202 19:57:23.114741  423409 cri.go:89] found id: "c935f2bdad559803c1b224bb424e2d6a8e3f939cc705debca52e51d3b73805cb"
	I1202 19:57:23.114745  423409 cri.go:89] found id: "2021a9af4b97cf9f19cd51daff4057de8ce4a98c1392ab4618729a6e1fdbe890"
	I1202 19:57:23.114755  423409 cri.go:89] found id: "7d3c2329b0b0c2e623e8d3059a441a596800bfcc5ff55d233343c158bb68d997"
	I1202 19:57:23.114759  423409 cri.go:89] found id: "33d9c5ffbca0f707ad94361bf00ebbc97925e1784dd973ef7bd8245741da9b67"
	I1202 19:57:23.114763  423409 cri.go:89] found id: "c59167a3c785bc464e3e63318df704b0084b4a2a24721b883033175b6f4b533f"
	I1202 19:57:23.114767  423409 cri.go:89] found id: "1d0670321bc4abe2d7954d0d6f908cf4e3863170f2e522b0100392c768577198"
	I1202 19:57:23.114771  423409 cri.go:89] found id: "9012f9d6215d108610b3c6096d8b9fd68c47c3b0a9ba15cab4f13cc9e385d4b9"
	I1202 19:57:23.114775  423409 cri.go:89] found id: "457ec4512e89c116a7c5ba880e93b4b91cf5fc694ff53ccf03533d6e1e36de9b"
	I1202 19:57:23.114787  423409 cri.go:89] found id: "91253d86ed19be0b0e1a31e49336ee85f71ca41d7f491fcc1fd6cd2978993ba0"
	I1202 19:57:23.114792  423409 cri.go:89] found id: "1a4586dbac8e8d1828435d72cdf3947bd1869e463e0102cc7b6664ebbeddeacf"
	I1202 19:57:23.114798  423409 cri.go:89] found id: "548b1d008679ffab8ee06c2f360e832860d3a7904cf593b25d31675b7bb892f9"
	I1202 19:57:23.114802  423409 cri.go:89] found id: "92d33e649bb3a4d1fbfddebd2c29786df9d0f9bbe8c4df37c931d3fb4cae82a7"
	I1202 19:57:23.114806  423409 cri.go:89] found id: "36e4834af56306fb62768ad3dbe8f24dbfa293561c0c41bee1c6d418ce06f454"
	I1202 19:57:23.114810  423409 cri.go:89] found id: "1053c12fee90a817e22976e0dc30541fb27c049e02c7c5af353833a13b30e982"
	I1202 19:57:23.114814  423409 cri.go:89] found id: "87e76e15e8595d052d66a4d86ee8b1416a8f60a669646ecbdd55cf8343b8db42"
	I1202 19:57:23.114817  423409 cri.go:89] found id: "54de7a8ca3420358423254cbf3d9a5a5e7140b7f46e22139375e40856000099c"
	I1202 19:57:23.114821  423409 cri.go:89] found id: "64bbafcaa8986f6e93390db3e1aa160fe3cecdd54cdd91e940adc5db87fefb45"
	I1202 19:57:23.114826  423409 cri.go:89] found id: ""
	I1202 19:57:23.114892  423409 ssh_runner.go:195] Run: sudo runc list -f json
	I1202 19:57:23.133572  423409 out.go:203] 
	W1202 19:57:23.135153  423409 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T19:57:23Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T19:57:23Z" level=error msg="open /run/runc: no such file or directory"
	
	W1202 19:57:23.135181  423409 out.go:285] * 
	* 
	W1202 19:57:23.141080  423409 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 19:57:23.142882  423409 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-amd64 -p addons-893295 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (12.27s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.27s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-bkjsl" [ee51e4e2-139f-407a-a020-b6a91e40e7bf] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004052742s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-893295 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-893295 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (266.495421ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 19:57:10.678033  421881 out.go:360] Setting OutFile to fd 1 ...
	I1202 19:57:10.678211  421881 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 19:57:10.678226  421881 out.go:374] Setting ErrFile to fd 2...
	I1202 19:57:10.678232  421881 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 19:57:10.678573  421881 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-407427/.minikube/bin
	I1202 19:57:10.678960  421881 mustload.go:66] Loading cluster: addons-893295
	I1202 19:57:10.679483  421881 config.go:182] Loaded profile config "addons-893295": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:57:10.679518  421881 addons.go:622] checking whether the cluster is paused
	I1202 19:57:10.679661  421881 config.go:182] Loaded profile config "addons-893295": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:57:10.679690  421881 host.go:66] Checking if "addons-893295" exists ...
	I1202 19:57:10.680252  421881 cli_runner.go:164] Run: docker container inspect addons-893295 --format={{.State.Status}}
	I1202 19:57:10.702291  421881 ssh_runner.go:195] Run: systemctl --version
	I1202 19:57:10.702353  421881 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-893295
	I1202 19:57:10.722294  421881 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/addons-893295/id_rsa Username:docker}
	I1202 19:57:10.822287  421881 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 19:57:10.822363  421881 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 19:57:10.852997  421881 cri.go:89] found id: "72a3a94a8615446f6a8a6edf8cab89a31462a9125890a07caa7b5c08f54ee5d4"
	I1202 19:57:10.853024  421881 cri.go:89] found id: "3fc3b9c2bb5465a31c0448a05bdfa005e3690110089411631dc7f034b6d8ba5f"
	I1202 19:57:10.853030  421881 cri.go:89] found id: "23592b1014e085ea0e5ab3db08387563e82cae3f3801aefb1a36803352f4b32c"
	I1202 19:57:10.853042  421881 cri.go:89] found id: "4873f6a4745b98e6565829135d48f208fc3b8c8fc38349268058cfe66db69ace"
	I1202 19:57:10.853046  421881 cri.go:89] found id: "69202c0144e36fa98f89f3e4dcc0bb6766cd1a5e7765438a217890a210ccc213"
	I1202 19:57:10.853050  421881 cri.go:89] found id: "e272a50ae70cef4e55de4fc5c4b0afb42c240aef2f0e61c0f58d21f32bb4b1b8"
	I1202 19:57:10.853055  421881 cri.go:89] found id: "343bfc0b495bea2a196f645318c6f732f4aac4d10f89f12fe35398625eac34a6"
	I1202 19:57:10.853059  421881 cri.go:89] found id: "c935f2bdad559803c1b224bb424e2d6a8e3f939cc705debca52e51d3b73805cb"
	I1202 19:57:10.853063  421881 cri.go:89] found id: "2021a9af4b97cf9f19cd51daff4057de8ce4a98c1392ab4618729a6e1fdbe890"
	I1202 19:57:10.853094  421881 cri.go:89] found id: "7d3c2329b0b0c2e623e8d3059a441a596800bfcc5ff55d233343c158bb68d997"
	I1202 19:57:10.853104  421881 cri.go:89] found id: "33d9c5ffbca0f707ad94361bf00ebbc97925e1784dd973ef7bd8245741da9b67"
	I1202 19:57:10.853109  421881 cri.go:89] found id: "c59167a3c785bc464e3e63318df704b0084b4a2a24721b883033175b6f4b533f"
	I1202 19:57:10.853114  421881 cri.go:89] found id: "1d0670321bc4abe2d7954d0d6f908cf4e3863170f2e522b0100392c768577198"
	I1202 19:57:10.853119  421881 cri.go:89] found id: "9012f9d6215d108610b3c6096d8b9fd68c47c3b0a9ba15cab4f13cc9e385d4b9"
	I1202 19:57:10.853124  421881 cri.go:89] found id: "457ec4512e89c116a7c5ba880e93b4b91cf5fc694ff53ccf03533d6e1e36de9b"
	I1202 19:57:10.853137  421881 cri.go:89] found id: "91253d86ed19be0b0e1a31e49336ee85f71ca41d7f491fcc1fd6cd2978993ba0"
	I1202 19:57:10.853150  421881 cri.go:89] found id: "1a4586dbac8e8d1828435d72cdf3947bd1869e463e0102cc7b6664ebbeddeacf"
	I1202 19:57:10.853158  421881 cri.go:89] found id: "548b1d008679ffab8ee06c2f360e832860d3a7904cf593b25d31675b7bb892f9"
	I1202 19:57:10.853163  421881 cri.go:89] found id: "92d33e649bb3a4d1fbfddebd2c29786df9d0f9bbe8c4df37c931d3fb4cae82a7"
	I1202 19:57:10.853167  421881 cri.go:89] found id: "36e4834af56306fb62768ad3dbe8f24dbfa293561c0c41bee1c6d418ce06f454"
	I1202 19:57:10.853175  421881 cri.go:89] found id: "1053c12fee90a817e22976e0dc30541fb27c049e02c7c5af353833a13b30e982"
	I1202 19:57:10.853184  421881 cri.go:89] found id: "87e76e15e8595d052d66a4d86ee8b1416a8f60a669646ecbdd55cf8343b8db42"
	I1202 19:57:10.853189  421881 cri.go:89] found id: "54de7a8ca3420358423254cbf3d9a5a5e7140b7f46e22139375e40856000099c"
	I1202 19:57:10.853193  421881 cri.go:89] found id: "64bbafcaa8986f6e93390db3e1aa160fe3cecdd54cdd91e940adc5db87fefb45"
	I1202 19:57:10.853198  421881 cri.go:89] found id: ""
	I1202 19:57:10.853247  421881 ssh_runner.go:195] Run: sudo runc list -f json
	I1202 19:57:10.868051  421881 out.go:203] 
	W1202 19:57:10.869308  421881 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T19:57:10Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T19:57:10Z" level=error msg="open /run/runc: no such file or directory"
	
	W1202 19:57:10.869329  421881 out.go:285] * 
	* 
	W1202 19:57:10.873529  421881 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 19:57:10.874676  421881 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-amd64 -p addons-893295 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (5.27s)

                                                
                                    
TestAddons/parallel/Yakd (5.27s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-vvw4f" [31b15c06-5f8f-44a7-aff7-8f554087b68d] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003879182s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-893295 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-893295 addons disable yakd --alsologtostderr -v=1: exit status 11 (268.59219ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 19:57:16.012895  422269 out.go:360] Setting OutFile to fd 1 ...
	I1202 19:57:16.013223  422269 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 19:57:16.013235  422269 out.go:374] Setting ErrFile to fd 2...
	I1202 19:57:16.013239  422269 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 19:57:16.013493  422269 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-407427/.minikube/bin
	I1202 19:57:16.013839  422269 mustload.go:66] Loading cluster: addons-893295
	I1202 19:57:16.014236  422269 config.go:182] Loaded profile config "addons-893295": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:57:16.014262  422269 addons.go:622] checking whether the cluster is paused
	I1202 19:57:16.014386  422269 config.go:182] Loaded profile config "addons-893295": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:57:16.014405  422269 host.go:66] Checking if "addons-893295" exists ...
	I1202 19:57:16.014892  422269 cli_runner.go:164] Run: docker container inspect addons-893295 --format={{.State.Status}}
	I1202 19:57:16.034310  422269 ssh_runner.go:195] Run: systemctl --version
	I1202 19:57:16.034374  422269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-893295
	I1202 19:57:16.054765  422269 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/addons-893295/id_rsa Username:docker}
	I1202 19:57:16.156834  422269 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 19:57:16.156945  422269 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 19:57:16.192956  422269 cri.go:89] found id: "72a3a94a8615446f6a8a6edf8cab89a31462a9125890a07caa7b5c08f54ee5d4"
	I1202 19:57:16.192984  422269 cri.go:89] found id: "3fc3b9c2bb5465a31c0448a05bdfa005e3690110089411631dc7f034b6d8ba5f"
	I1202 19:57:16.192990  422269 cri.go:89] found id: "23592b1014e085ea0e5ab3db08387563e82cae3f3801aefb1a36803352f4b32c"
	I1202 19:57:16.192996  422269 cri.go:89] found id: "4873f6a4745b98e6565829135d48f208fc3b8c8fc38349268058cfe66db69ace"
	I1202 19:57:16.193001  422269 cri.go:89] found id: "69202c0144e36fa98f89f3e4dcc0bb6766cd1a5e7765438a217890a210ccc213"
	I1202 19:57:16.193006  422269 cri.go:89] found id: "e272a50ae70cef4e55de4fc5c4b0afb42c240aef2f0e61c0f58d21f32bb4b1b8"
	I1202 19:57:16.193011  422269 cri.go:89] found id: "343bfc0b495bea2a196f645318c6f732f4aac4d10f89f12fe35398625eac34a6"
	I1202 19:57:16.193016  422269 cri.go:89] found id: "c935f2bdad559803c1b224bb424e2d6a8e3f939cc705debca52e51d3b73805cb"
	I1202 19:57:16.193020  422269 cri.go:89] found id: "2021a9af4b97cf9f19cd51daff4057de8ce4a98c1392ab4618729a6e1fdbe890"
	I1202 19:57:16.193029  422269 cri.go:89] found id: "7d3c2329b0b0c2e623e8d3059a441a596800bfcc5ff55d233343c158bb68d997"
	I1202 19:57:16.193046  422269 cri.go:89] found id: "33d9c5ffbca0f707ad94361bf00ebbc97925e1784dd973ef7bd8245741da9b67"
	I1202 19:57:16.193055  422269 cri.go:89] found id: "c59167a3c785bc464e3e63318df704b0084b4a2a24721b883033175b6f4b533f"
	I1202 19:57:16.193060  422269 cri.go:89] found id: "1d0670321bc4abe2d7954d0d6f908cf4e3863170f2e522b0100392c768577198"
	I1202 19:57:16.193091  422269 cri.go:89] found id: "9012f9d6215d108610b3c6096d8b9fd68c47c3b0a9ba15cab4f13cc9e385d4b9"
	I1202 19:57:16.193099  422269 cri.go:89] found id: "457ec4512e89c116a7c5ba880e93b4b91cf5fc694ff53ccf03533d6e1e36de9b"
	I1202 19:57:16.193128  422269 cri.go:89] found id: "91253d86ed19be0b0e1a31e49336ee85f71ca41d7f491fcc1fd6cd2978993ba0"
	I1202 19:57:16.193138  422269 cri.go:89] found id: "1a4586dbac8e8d1828435d72cdf3947bd1869e463e0102cc7b6664ebbeddeacf"
	I1202 19:57:16.193143  422269 cri.go:89] found id: "548b1d008679ffab8ee06c2f360e832860d3a7904cf593b25d31675b7bb892f9"
	I1202 19:57:16.193148  422269 cri.go:89] found id: "92d33e649bb3a4d1fbfddebd2c29786df9d0f9bbe8c4df37c931d3fb4cae82a7"
	I1202 19:57:16.193153  422269 cri.go:89] found id: "36e4834af56306fb62768ad3dbe8f24dbfa293561c0c41bee1c6d418ce06f454"
	I1202 19:57:16.193157  422269 cri.go:89] found id: "1053c12fee90a817e22976e0dc30541fb27c049e02c7c5af353833a13b30e982"
	I1202 19:57:16.193170  422269 cri.go:89] found id: "87e76e15e8595d052d66a4d86ee8b1416a8f60a669646ecbdd55cf8343b8db42"
	I1202 19:57:16.193178  422269 cri.go:89] found id: "54de7a8ca3420358423254cbf3d9a5a5e7140b7f46e22139375e40856000099c"
	I1202 19:57:16.193182  422269 cri.go:89] found id: "64bbafcaa8986f6e93390db3e1aa160fe3cecdd54cdd91e940adc5db87fefb45"
	I1202 19:57:16.193190  422269 cri.go:89] found id: ""
	I1202 19:57:16.193237  422269 ssh_runner.go:195] Run: sudo runc list -f json
	I1202 19:57:16.209721  422269 out.go:203] 
	W1202 19:57:16.210937  422269 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T19:57:16Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T19:57:16Z" level=error msg="open /run/runc: no such file or directory"
	
	W1202 19:57:16.210955  422269 out.go:285] * 
	* 
	W1202 19:57:16.215056  422269 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 19:57:16.216435  422269 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable yakd addon: args "out/minikube-linux-amd64 -p addons-893295 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (5.27s)
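
Editor's note: every addon-disable failure in this group exits the same way. The log above shows minikube first "checking whether the cluster is paused" and then failing on `sudo runc list -f json` because /run/runc does not exist on this CRI-O node. The following is a minimal, illustrative Go sketch of that kind of probe, not minikube's actual implementation; it runs the same command and treats the exact "no such file or directory" error seen above as "nothing is paused" instead of a hard failure.

package main

import (
	"encoding/json"
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// runcContainer holds the fields of `runc list -f json` output used here.
type runcContainer struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

// pausedContainers runs the same command the failing check runs above and
// returns the IDs of paused containers. A missing runc state directory
// (the error in the log) is treated as "no containers" rather than fatal.
func pausedContainers() ([]string, error) {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
	if err != nil {
		if strings.Contains(string(out), "no such file or directory") {
			return nil, nil
		}
		return nil, fmt.Errorf("runc list: %v: %s", err, out)
	}
	var all []runcContainer
	if s := strings.TrimSpace(string(out)); s != "" && s != "null" {
		if err := json.Unmarshal(out, &all); err != nil {
			return nil, err
		}
	}
	var paused []string
	for _, c := range all {
		if c.Status == "paused" {
			paused = append(paused, c.ID)
		}
	}
	return paused, nil
}

func main() {
	ids, err := pausedContainers()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("paused containers: %v\n", ids)
}

The identical /run/runc error appears again in the AmdGpuDevicePlugin failure below (and in the other exit-status-11 addon-disable failures in this run), so they share this single root cause.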

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (5.28s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:352: "amd-gpu-device-plugin-nklpz" [9d4535df-fe2e-4f5a-8273-23b1b3e6d8b8] Running
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 5.005501096s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-893295 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-893295 addons disable amd-gpu-device-plugin --alsologtostderr -v=1: exit status 11 (268.963768ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 19:57:21.286690  423017 out.go:360] Setting OutFile to fd 1 ...
	I1202 19:57:21.286986  423017 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 19:57:21.286997  423017 out.go:374] Setting ErrFile to fd 2...
	I1202 19:57:21.287002  423017 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 19:57:21.287221  423017 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-407427/.minikube/bin
	I1202 19:57:21.287522  423017 mustload.go:66] Loading cluster: addons-893295
	I1202 19:57:21.287850  423017 config.go:182] Loaded profile config "addons-893295": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:57:21.287871  423017 addons.go:622] checking whether the cluster is paused
	I1202 19:57:21.287947  423017 config.go:182] Loaded profile config "addons-893295": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:57:21.287964  423017 host.go:66] Checking if "addons-893295" exists ...
	I1202 19:57:21.288411  423017 cli_runner.go:164] Run: docker container inspect addons-893295 --format={{.State.Status}}
	I1202 19:57:21.308051  423017 ssh_runner.go:195] Run: systemctl --version
	I1202 19:57:21.308159  423017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-893295
	I1202 19:57:21.326721  423017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/addons-893295/id_rsa Username:docker}
	I1202 19:57:21.428147  423017 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 19:57:21.428261  423017 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 19:57:21.465276  423017 cri.go:89] found id: "72a3a94a8615446f6a8a6edf8cab89a31462a9125890a07caa7b5c08f54ee5d4"
	I1202 19:57:21.465300  423017 cri.go:89] found id: "3fc3b9c2bb5465a31c0448a05bdfa005e3690110089411631dc7f034b6d8ba5f"
	I1202 19:57:21.465305  423017 cri.go:89] found id: "23592b1014e085ea0e5ab3db08387563e82cae3f3801aefb1a36803352f4b32c"
	I1202 19:57:21.465308  423017 cri.go:89] found id: "4873f6a4745b98e6565829135d48f208fc3b8c8fc38349268058cfe66db69ace"
	I1202 19:57:21.465311  423017 cri.go:89] found id: "69202c0144e36fa98f89f3e4dcc0bb6766cd1a5e7765438a217890a210ccc213"
	I1202 19:57:21.465315  423017 cri.go:89] found id: "e272a50ae70cef4e55de4fc5c4b0afb42c240aef2f0e61c0f58d21f32bb4b1b8"
	I1202 19:57:21.465318  423017 cri.go:89] found id: "343bfc0b495bea2a196f645318c6f732f4aac4d10f89f12fe35398625eac34a6"
	I1202 19:57:21.465320  423017 cri.go:89] found id: "c935f2bdad559803c1b224bb424e2d6a8e3f939cc705debca52e51d3b73805cb"
	I1202 19:57:21.465323  423017 cri.go:89] found id: "2021a9af4b97cf9f19cd51daff4057de8ce4a98c1392ab4618729a6e1fdbe890"
	I1202 19:57:21.465329  423017 cri.go:89] found id: "7d3c2329b0b0c2e623e8d3059a441a596800bfcc5ff55d233343c158bb68d997"
	I1202 19:57:21.465332  423017 cri.go:89] found id: "33d9c5ffbca0f707ad94361bf00ebbc97925e1784dd973ef7bd8245741da9b67"
	I1202 19:57:21.465337  423017 cri.go:89] found id: "c59167a3c785bc464e3e63318df704b0084b4a2a24721b883033175b6f4b533f"
	I1202 19:57:21.465341  423017 cri.go:89] found id: "1d0670321bc4abe2d7954d0d6f908cf4e3863170f2e522b0100392c768577198"
	I1202 19:57:21.465346  423017 cri.go:89] found id: "9012f9d6215d108610b3c6096d8b9fd68c47c3b0a9ba15cab4f13cc9e385d4b9"
	I1202 19:57:21.465350  423017 cri.go:89] found id: "457ec4512e89c116a7c5ba880e93b4b91cf5fc694ff53ccf03533d6e1e36de9b"
	I1202 19:57:21.465361  423017 cri.go:89] found id: "91253d86ed19be0b0e1a31e49336ee85f71ca41d7f491fcc1fd6cd2978993ba0"
	I1202 19:57:21.465370  423017 cri.go:89] found id: "1a4586dbac8e8d1828435d72cdf3947bd1869e463e0102cc7b6664ebbeddeacf"
	I1202 19:57:21.465377  423017 cri.go:89] found id: "548b1d008679ffab8ee06c2f360e832860d3a7904cf593b25d31675b7bb892f9"
	I1202 19:57:21.465382  423017 cri.go:89] found id: "92d33e649bb3a4d1fbfddebd2c29786df9d0f9bbe8c4df37c931d3fb4cae82a7"
	I1202 19:57:21.465387  423017 cri.go:89] found id: "36e4834af56306fb62768ad3dbe8f24dbfa293561c0c41bee1c6d418ce06f454"
	I1202 19:57:21.465391  423017 cri.go:89] found id: "1053c12fee90a817e22976e0dc30541fb27c049e02c7c5af353833a13b30e982"
	I1202 19:57:21.465394  423017 cri.go:89] found id: "87e76e15e8595d052d66a4d86ee8b1416a8f60a669646ecbdd55cf8343b8db42"
	I1202 19:57:21.465396  423017 cri.go:89] found id: "54de7a8ca3420358423254cbf3d9a5a5e7140b7f46e22139375e40856000099c"
	I1202 19:57:21.465399  423017 cri.go:89] found id: "64bbafcaa8986f6e93390db3e1aa160fe3cecdd54cdd91e940adc5db87fefb45"
	I1202 19:57:21.465402  423017 cri.go:89] found id: ""
	I1202 19:57:21.465446  423017 ssh_runner.go:195] Run: sudo runc list -f json
	I1202 19:57:21.483178  423017 out.go:203] 
	W1202 19:57:21.484761  423017 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T19:57:21Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T19:57:21Z" level=error msg="open /run/runc: no such file or directory"
	
	W1202 19:57:21.484788  423017 out.go:285] * 
	* 
	W1202 19:57:21.490635  423017 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 19:57:21.492233  423017 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable amd-gpu-device-plugin addon: args "out/minikube-linux-amd64 -p addons-893295 addons disable amd-gpu-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/AmdGpuDevicePlugin (5.28s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (603.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-536475 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-536475 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-fpjpw" [43fa9860-79c4-42f8-8e1a-d4fcb75d7aa7] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-536475 -n functional-536475
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-12-02 20:13:05.312934946 +0000 UTC m=+1133.862717582
functional_test.go:1645: (dbg) Run:  kubectl --context functional-536475 describe po hello-node-connect-7d85dfc575-fpjpw -n default
functional_test.go:1645: (dbg) kubectl --context functional-536475 describe po hello-node-connect-7d85dfc575-fpjpw -n default:
Name:             hello-node-connect-7d85dfc575-fpjpw
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-536475/192.168.49.2
Start Time:       Tue, 02 Dec 2025 20:03:04 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-shmnx (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-shmnx:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-fpjpw to functional-536475
Normal   Pulling    6m59s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     6m59s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     6m59s (x5 over 10m)   kubelet            Error: ErrImagePull
Normal   BackOff    4m52s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m52s (x21 over 10m)  kubelet            Error: ImagePullBackOff
functional_test.go:1645: (dbg) Run:  kubectl --context functional-536475 logs hello-node-connect-7d85dfc575-fpjpw -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-536475 logs hello-node-connect-7d85dfc575-fpjpw -n default: exit status 1 (65.292914ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-fpjpw" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1645: kubectl --context functional-536475 logs hello-node-connect-7d85dfc575-fpjpw -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
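
Editor's note: the pod never starts because the deployment uses the unqualified image name kicbase/echo-server, and the kubelet events above show CRI-O rejecting it ("short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list"). A fully qualified reference avoids the short-name alias lookup entirely. The sketch below is illustrative only; the docker.io prefix is an assumption about where the test image lives, not something stated in this log.

package main

import (
	"fmt"
	"strings"
)

// qualify prepends a registry (and a :latest tag) when the reference carries
// neither, so "kicbase/echo-server" becomes "docker.io/kicbase/echo-server:latest".
// Short-name enforcement never triggers for a reference that already names its
// registry.
func qualify(ref, defaultRegistry string) string {
	first := strings.SplitN(ref, "/", 2)[0]
	// The first path component counts as a registry host only if it looks like
	// one: contains a dot or a port, or is "localhost".
	if !strings.ContainsAny(first, ".:") && first != "localhost" {
		ref = defaultRegistry + "/" + ref
	}
	if last := ref[strings.LastIndex(ref, "/")+1:]; !strings.Contains(last, ":") {
		ref += ":latest"
	}
	return ref
}

func main() {
	fmt.Println(qualify("kicbase/echo-server", "docker.io"))
	// Output: docker.io/kicbase/echo-server:latest
}

With a qualified name, `kubectl create deployment hello-node-connect --image docker.io/kicbase/echo-server` would sidestep the ambiguous-alias resolution (again assuming docker.io is the correct registry for this image).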
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-536475 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-fpjpw
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-536475/192.168.49.2
Start Time:       Tue, 02 Dec 2025 20:03:04 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-shmnx (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-shmnx:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-fpjpw to functional-536475
Normal   Pulling    6m59s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     6m59s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     6m59s (x5 over 10m)   kubelet            Error: ErrImagePull
Normal   BackOff    4m52s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m52s (x21 over 10m)  kubelet            Error: ImagePullBackOff

                                                
                                                
functional_test.go:1618: (dbg) Run:  kubectl --context functional-536475 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-536475 logs -l app=hello-node-connect: exit status 1 (64.096883ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-fpjpw" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1620: "kubectl --context functional-536475 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-536475 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.97.179.176
IPs:                      10.97.179.176
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  32276/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
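
Editor's note: the empty Endpoints field in the service description above is the direct consequence of the ImagePullBackOff. With no Ready pod behind the selector, the NodePort has nothing to forward to, so the connect test could not pass regardless of timeout. Below is a small client-go sketch (not part of the test suite) that makes this check explicit, assuming a standard ~/.kube/config pointing at the cluster under test.

package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	// Load the same kubeconfig kubectl would use; the service name and
	// namespace match the test above ("hello-node-connect" in "default").
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	eps, err := client.CoreV1().Endpoints("default").Get(context.Background(), "hello-node-connect", metav1.GetOptions{})
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	ready := 0
	for _, subset := range eps.Subsets {
		ready += len(subset.Addresses) // only Ready pod IPs land in Addresses
	}
	fmt.Printf("ready backends for hello-node-connect: %d\n", ready)
	if ready == 0 {
		fmt.Println("no ready backends: a NodePort request would fail, matching the empty Endpoints above")
		os.Exit(1)
	}
}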
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-536475
helpers_test.go:243: (dbg) docker inspect functional-536475:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "45c2a75750ad72934d982aae2d636ef33be872c45c91cc6630a9bc364cfa84f2",
	        "Created": "2025-12-02T20:01:04.087840977Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 435312,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-02T20:01:04.121884586Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/45c2a75750ad72934d982aae2d636ef33be872c45c91cc6630a9bc364cfa84f2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/45c2a75750ad72934d982aae2d636ef33be872c45c91cc6630a9bc364cfa84f2/hostname",
	        "HostsPath": "/var/lib/docker/containers/45c2a75750ad72934d982aae2d636ef33be872c45c91cc6630a9bc364cfa84f2/hosts",
	        "LogPath": "/var/lib/docker/containers/45c2a75750ad72934d982aae2d636ef33be872c45c91cc6630a9bc364cfa84f2/45c2a75750ad72934d982aae2d636ef33be872c45c91cc6630a9bc364cfa84f2-json.log",
	        "Name": "/functional-536475",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-536475:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-536475",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "45c2a75750ad72934d982aae2d636ef33be872c45c91cc6630a9bc364cfa84f2",
	                "LowerDir": "/var/lib/docker/overlay2/c7178298ef54df3f8881e6b754005db60e2b75bdc4077757d956c58ffcd9d7a9-init/diff:/var/lib/docker/overlay2/49bb44e987885c48600d5ae7ad3c81cad82a00f38070c0460882c5746d9fae59/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c7178298ef54df3f8881e6b754005db60e2b75bdc4077757d956c58ffcd9d7a9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c7178298ef54df3f8881e6b754005db60e2b75bdc4077757d956c58ffcd9d7a9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c7178298ef54df3f8881e6b754005db60e2b75bdc4077757d956c58ffcd9d7a9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-536475",
	                "Source": "/var/lib/docker/volumes/functional-536475/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-536475",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-536475",
	                "name.minikube.sigs.k8s.io": "functional-536475",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "cf9faa18b28f546cc9a96ce6397f091d4e299296f6e1af3c4ab82716fbad1ddc",
	            "SandboxKey": "/var/run/docker/netns/cf9faa18b28f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33158"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33159"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33162"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33160"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33161"
	                    }
	                ]
	            },
	            "Networks": {
	                "functional-536475": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "691743c113e59c6a17342841c37db036ccaf940a1ffb956f3c38948dda097042",
	                    "EndpointID": "c3ead526a5f3ee178c1570653e7cea68dc57007d56cb24c102fd16930bc34c7e",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "9e:13:ad:30:fc:dc",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-536475",
	                        "45c2a75750ad"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-536475 -n functional-536475
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-536475 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-536475 logs -n 25: (1.353747937s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                        ARGS                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ mount          │ -p functional-536475 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2340587714/001:/mount3 --alsologtostderr -v=1 │ functional-536475 │ jenkins │ v1.37.0 │ 02 Dec 25 20:03 UTC │                     │
	│ ssh            │ functional-536475 ssh findmnt -T /mount1                                                                           │ functional-536475 │ jenkins │ v1.37.0 │ 02 Dec 25 20:03 UTC │                     │
	│ ssh            │ functional-536475 ssh findmnt -T /mount1                                                                           │ functional-536475 │ jenkins │ v1.37.0 │ 02 Dec 25 20:03 UTC │ 02 Dec 25 20:03 UTC │
	│ ssh            │ functional-536475 ssh findmnt -T /mount2                                                                           │ functional-536475 │ jenkins │ v1.37.0 │ 02 Dec 25 20:03 UTC │ 02 Dec 25 20:03 UTC │
	│ ssh            │ functional-536475 ssh findmnt -T /mount3                                                                           │ functional-536475 │ jenkins │ v1.37.0 │ 02 Dec 25 20:03 UTC │ 02 Dec 25 20:03 UTC │
	│ mount          │ -p functional-536475 --kill=true                                                                                   │ functional-536475 │ jenkins │ v1.37.0 │ 02 Dec 25 20:03 UTC │                     │
	│ start          │ -p functional-536475 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio          │ functional-536475 │ jenkins │ v1.37.0 │ 02 Dec 25 20:03 UTC │                     │
	│ start          │ -p functional-536475 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                    │ functional-536475 │ jenkins │ v1.37.0 │ 02 Dec 25 20:03 UTC │                     │
	│ start          │ -p functional-536475 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio          │ functional-536475 │ jenkins │ v1.37.0 │ 02 Dec 25 20:03 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-536475 --alsologtostderr -v=1                                                     │ functional-536475 │ jenkins │ v1.37.0 │ 02 Dec 25 20:03 UTC │ 02 Dec 25 20:03 UTC │
	│ update-context │ functional-536475 update-context --alsologtostderr -v=2                                                            │ functional-536475 │ jenkins │ v1.37.0 │ 02 Dec 25 20:03 UTC │ 02 Dec 25 20:03 UTC │
	│ update-context │ functional-536475 update-context --alsologtostderr -v=2                                                            │ functional-536475 │ jenkins │ v1.37.0 │ 02 Dec 25 20:03 UTC │ 02 Dec 25 20:03 UTC │
	│ update-context │ functional-536475 update-context --alsologtostderr -v=2                                                            │ functional-536475 │ jenkins │ v1.37.0 │ 02 Dec 25 20:03 UTC │ 02 Dec 25 20:03 UTC │
	│ image          │ functional-536475 image ls --format short --alsologtostderr                                                        │ functional-536475 │ jenkins │ v1.37.0 │ 02 Dec 25 20:03 UTC │ 02 Dec 25 20:03 UTC │
	│ image          │ functional-536475 image ls --format yaml --alsologtostderr                                                         │ functional-536475 │ jenkins │ v1.37.0 │ 02 Dec 25 20:03 UTC │ 02 Dec 25 20:03 UTC │
	│ ssh            │ functional-536475 ssh pgrep buildkitd                                                                              │ functional-536475 │ jenkins │ v1.37.0 │ 02 Dec 25 20:03 UTC │                     │
	│ image          │ functional-536475 image build -t localhost/my-image:functional-536475 testdata/build --alsologtostderr             │ functional-536475 │ jenkins │ v1.37.0 │ 02 Dec 25 20:03 UTC │ 02 Dec 25 20:03 UTC │
	│ image          │ functional-536475 image ls                                                                                         │ functional-536475 │ jenkins │ v1.37.0 │ 02 Dec 25 20:03 UTC │ 02 Dec 25 20:03 UTC │
	│ image          │ functional-536475 image ls --format json --alsologtostderr                                                         │ functional-536475 │ jenkins │ v1.37.0 │ 02 Dec 25 20:03 UTC │ 02 Dec 25 20:03 UTC │
	│ image          │ functional-536475 image ls --format table --alsologtostderr                                                        │ functional-536475 │ jenkins │ v1.37.0 │ 02 Dec 25 20:03 UTC │ 02 Dec 25 20:03 UTC │
	│ service        │ functional-536475 service list                                                                                     │ functional-536475 │ jenkins │ v1.37.0 │ 02 Dec 25 20:12 UTC │ 02 Dec 25 20:12 UTC │
	│ service        │ functional-536475 service list -o json                                                                             │ functional-536475 │ jenkins │ v1.37.0 │ 02 Dec 25 20:12 UTC │ 02 Dec 25 20:12 UTC │
	│ service        │ functional-536475 service --namespace=default --https --url hello-node                                             │ functional-536475 │ jenkins │ v1.37.0 │ 02 Dec 25 20:12 UTC │                     │
	│ service        │ functional-536475 service hello-node --url --format={{.IP}}                                                        │ functional-536475 │ jenkins │ v1.37.0 │ 02 Dec 25 20:12 UTC │                     │
	│ service        │ functional-536475 service hello-node --url                                                                         │ functional-536475 │ jenkins │ v1.37.0 │ 02 Dec 25 20:12 UTC │                     │
	└────────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 20:03:19
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 20:03:19.514807  450734 out.go:360] Setting OutFile to fd 1 ...
	I1202 20:03:19.515103  450734 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:03:19.515114  450734 out.go:374] Setting ErrFile to fd 2...
	I1202 20:03:19.515118  450734 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:03:19.515455  450734 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-407427/.minikube/bin
	I1202 20:03:19.515940  450734 out.go:368] Setting JSON to false
	I1202 20:03:19.517003  450734 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":6344,"bootTime":1764699456,"procs":241,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 20:03:19.517088  450734 start.go:143] virtualization: kvm guest
	I1202 20:03:19.518607  450734 out.go:179] * [functional-536475] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1202 20:03:19.519957  450734 out.go:179]   - MINIKUBE_LOCATION=21997
	I1202 20:03:19.519959  450734 notify.go:221] Checking for updates...
	I1202 20:03:19.522291  450734 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 20:03:19.523431  450734 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-407427/kubeconfig
	I1202 20:03:19.524587  450734 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-407427/.minikube
	I1202 20:03:19.525629  450734 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1202 20:03:19.526551  450734 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 20:03:19.528031  450734 config.go:182] Loaded profile config "functional-536475": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 20:03:19.528657  450734 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 20:03:19.554866  450734 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1202 20:03:19.554962  450734 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 20:03:19.618156  450734 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-02 20:03:19.6067245 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 20:03:19.618287  450734 docker.go:319] overlay module found
	I1202 20:03:19.619766  450734 out.go:179] * Using the docker driver based on the existing profile
	I1202 20:03:19.620763  450734 start.go:309] selected driver: docker
	I1202 20:03:19.620779  450734 start.go:927] validating driver "docker" against &{Name:functional-536475 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-536475 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 20:03:19.620882  450734 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 20:03:19.622670  450734 out.go:203] 
	W1202 20:03:19.624214  450734 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: The requested memory allocation of 250MiB is below the usable minimum of 1800MB
	I1202 20:03:19.625163  450734 out.go:203] 
	
	
	==> CRI-O <==
	Dec 02 20:03:22 functional-536475 crio[3575]: time="2025-12-02T20:03:22.761088258Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 20:03:22 functional-536475 crio[3575]: time="2025-12-02T20:03:22.761314419Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/639b6b19aa5b50d4eace420ead72db264854a68eb9ebdac88d67fc5def4c4660/merged/etc/group: no such file or directory"
	Dec 02 20:03:22 functional-536475 crio[3575]: time="2025-12-02T20:03:22.761662325Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 20:03:22 functional-536475 crio[3575]: time="2025-12-02T20:03:22.790719876Z" level=info msg="Created container 7d96d05be903d9fbba662a344d7f3d05623dde94a1b8c3601d090f26bcfba589: kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-spcgp/dashboard-metrics-scraper" id=35701197-9d5d-4aeb-b3b8-5c71ac7c2447 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 20:03:22 functional-536475 crio[3575]: time="2025-12-02T20:03:22.79160153Z" level=info msg="Starting container: 7d96d05be903d9fbba662a344d7f3d05623dde94a1b8c3601d090f26bcfba589" id=164b0d98-ec39-42d6-8f84-e739cc7be6e0 name=/runtime.v1.RuntimeService/StartContainer
	Dec 02 20:03:22 functional-536475 crio[3575]: time="2025-12-02T20:03:22.793694675Z" level=info msg="Started container" PID=7573 containerID=7d96d05be903d9fbba662a344d7f3d05623dde94a1b8c3601d090f26bcfba589 description=kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-spcgp/dashboard-metrics-scraper id=164b0d98-ec39-42d6-8f84-e739cc7be6e0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=58491e3701d124c245b1faedaf239c520defd05799b2bc17a80e2477aeedf78f
	Dec 02 20:03:26 functional-536475 crio[3575]: time="2025-12-02T20:03:26.416981639Z" level=info msg="Pulled image: docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029" id=40300b7c-2275-4e6d-801d-0e6c611917bc name=/runtime.v1.ImageService/PullImage
	Dec 02 20:03:26 functional-536475 crio[3575]: time="2025-12-02T20:03:26.417719277Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=df8051fc-30a8-4ee8-b542-c8425197cf3e name=/runtime.v1.ImageService/ImageStatus
	Dec 02 20:03:26 functional-536475 crio[3575]: time="2025-12-02T20:03:26.41950201Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=60cd7cc4-96f9-4c51-9638-e0b2932f305b name=/runtime.v1.ImageService/ImageStatus
	Dec 02 20:03:26 functional-536475 crio[3575]: time="2025-12-02T20:03:26.423781231Z" level=info msg="Creating container: kubernetes-dashboard/kubernetes-dashboard-855c9754f9-5z7lx/kubernetes-dashboard" id=ced43447-926d-4cfd-8324-65cac4b3a5a9 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 20:03:26 functional-536475 crio[3575]: time="2025-12-02T20:03:26.423972992Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 20:03:26 functional-536475 crio[3575]: time="2025-12-02T20:03:26.428007742Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 20:03:26 functional-536475 crio[3575]: time="2025-12-02T20:03:26.428204178Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/f3be0ac6b525467fbf93941d97191d6f2b4d2b1e3f9175226e403d98547592ab/merged/etc/group: no such file or directory"
	Dec 02 20:03:26 functional-536475 crio[3575]: time="2025-12-02T20:03:26.428508859Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 20:03:26 functional-536475 crio[3575]: time="2025-12-02T20:03:26.457557413Z" level=info msg="Created container 7e3168c9d862a0a7619ac8b93028906bfcd380ace1b8f2d6ea1928d28ab2a9a5: kubernetes-dashboard/kubernetes-dashboard-855c9754f9-5z7lx/kubernetes-dashboard" id=ced43447-926d-4cfd-8324-65cac4b3a5a9 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 20:03:26 functional-536475 crio[3575]: time="2025-12-02T20:03:26.458226716Z" level=info msg="Starting container: 7e3168c9d862a0a7619ac8b93028906bfcd380ace1b8f2d6ea1928d28ab2a9a5" id=06d11373-80e5-4c6d-8fd3-715ee3a4f03f name=/runtime.v1.RuntimeService/StartContainer
	Dec 02 20:03:26 functional-536475 crio[3575]: time="2025-12-02T20:03:26.460093972Z" level=info msg="Started container" PID=8076 containerID=7e3168c9d862a0a7619ac8b93028906bfcd380ace1b8f2d6ea1928d28ab2a9a5 description=kubernetes-dashboard/kubernetes-dashboard-855c9754f9-5z7lx/kubernetes-dashboard id=06d11373-80e5-4c6d-8fd3-715ee3a4f03f name=/runtime.v1.RuntimeService/StartContainer sandboxID=40ae7c6e449edd20393c90c4653cb12a5efc2d27f8c14bd3422cc0722ee237e9
	Dec 02 20:03:41 functional-536475 crio[3575]: time="2025-12-02T20:03:41.235644895Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=8ef080ae-df87-4281-abf0-29906347c31a name=/runtime.v1.ImageService/PullImage
	Dec 02 20:03:42 functional-536475 crio[3575]: time="2025-12-02T20:03:42.235473307Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=9a795ecd-da4f-4b9c-a6d3-4927d69ab73c name=/runtime.v1.ImageService/PullImage
	Dec 02 20:04:32 functional-536475 crio[3575]: time="2025-12-02T20:04:32.236048212Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=b9cc15da-2a25-48a8-b0a3-6c7cde473af5 name=/runtime.v1.ImageService/PullImage
	Dec 02 20:04:35 functional-536475 crio[3575]: time="2025-12-02T20:04:35.235472205Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=285ca0ae-2534-49e5-8086-f902ed2abbd4 name=/runtime.v1.ImageService/PullImage
	Dec 02 20:05:59 functional-536475 crio[3575]: time="2025-12-02T20:05:59.235891833Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=0f6e7cf5-6ec3-4655-b8a5-823b2c8d7776 name=/runtime.v1.ImageService/PullImage
	Dec 02 20:06:06 functional-536475 crio[3575]: time="2025-12-02T20:06:06.237255309Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=d12820df-083b-4f72-a56b-bddb435506c5 name=/runtime.v1.ImageService/PullImage
	Dec 02 20:08:50 functional-536475 crio[3575]: time="2025-12-02T20:08:50.23634485Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=5a027e13-dcd3-44ca-ae09-f3d25a26122b name=/runtime.v1.ImageService/PullImage
	Dec 02 20:08:50 functional-536475 crio[3575]: time="2025-12-02T20:08:50.237061038Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=ac39834c-a51f-4f00-aa85-52ce6a86d13a name=/runtime.v1.ImageService/PullImage
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	7e3168c9d862a       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029         9 minutes ago       Running             kubernetes-dashboard        0                   40ae7c6e449ed       kubernetes-dashboard-855c9754f9-5z7lx        kubernetes-dashboard
	7d96d05be903d       docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a   9 minutes ago       Running             dashboard-metrics-scraper   0                   58491e3701d12       dashboard-metrics-scraper-77bf4d6c4c-spcgp   kubernetes-dashboard
	69f4382a752a0       docker.io/library/nginx@sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541                  9 minutes ago       Running             myfrontend                  0                   514992ba15134       sp-pod                                       default
	b422aeb0fab91       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998              9 minutes ago       Exited              mount-munger                0                   8d6507cdf0257       busybox-mount                                default
	9657603bcfb9f       docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7                  10 minutes ago      Running             nginx                       0                   f604af2cc7e1e       nginx-svc                                    default
	6a817d92a684a       docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da                  10 minutes ago      Running             mysql                       0                   b452bd83ba20d       mysql-5bb876957f-j8cq5                       default
	18b819fdfdbd7       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                                 10 minutes ago      Running             kube-apiserver              0                   40001084b2735       kube-apiserver-functional-536475             kube-system
	4fbf66f1f1638       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                                 10 minutes ago      Running             kube-controller-manager     2                   fe14f04382982       kube-controller-manager-functional-536475    kube-system
	771e66a82c39a       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                                 10 minutes ago      Running             etcd                        1                   3f50baa0a5461       etcd-functional-536475                       kube-system
	a873056639cd3       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                                 11 minutes ago      Exited              kube-controller-manager     1                   fe14f04382982       kube-controller-manager-functional-536475    kube-system
	0fd3e87e380a4       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                 11 minutes ago      Running             coredns                     1                   07e49685c4a21       coredns-66bc5c9577-zsmf4                     kube-system
	26d8d1fc15e04       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 11 minutes ago      Running             storage-provisioner         1                   a64e8b8825f98       storage-provisioner                          kube-system
	e389f07df50c3       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                                 11 minutes ago      Running             kube-proxy                  1                   802ffe859ae23       kube-proxy-gd5j7                             kube-system
	c0255d6dd66f6       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                 11 minutes ago      Running             kindnet-cni                 1                   ef2929101a7e9       kindnet-rtsj2                                kube-system
	aae70adc6dd17       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                                 11 minutes ago      Running             kube-scheduler              1                   d59109b4c3f1a       kube-scheduler-functional-536475             kube-system
	f232084f705e6       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                 11 minutes ago      Exited              coredns                     0                   07e49685c4a21       coredns-66bc5c9577-zsmf4                     kube-system
	c7c54e1c1555a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 11 minutes ago      Exited              storage-provisioner         0                   a64e8b8825f98       storage-provisioner                          kube-system
	452f8a3a5ca14       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                 11 minutes ago      Exited              kindnet-cni                 0                   ef2929101a7e9       kindnet-rtsj2                                kube-system
	64e65ab14985d       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                                 11 minutes ago      Exited              kube-proxy                  0                   802ffe859ae23       kube-proxy-gd5j7                             kube-system
	50eb733a1ea0e       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                                 11 minutes ago      Exited              kube-scheduler              0                   d59109b4c3f1a       kube-scheduler-functional-536475             kube-system
	6a3c655f3f2b2       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                                 11 minutes ago      Exited              etcd                        0                   3f50baa0a5461       etcd-functional-536475                       kube-system
	
	
	==> coredns [0fd3e87e380a4e0a76fc2e505d24b0f669f2eca021af47c599944e0dd2cdf38e] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:42469 - 12619 "HINFO IN 1006733709643989095.429842628389098275. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.022778727s
	
	
	==> coredns [f232084f705e65568cf987c5a6652063f55b746653153bec8eb81901fa620150] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:47003 - 37508 "HINFO IN 4923326009302588622.3446798046552459308. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.023359764s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-536475
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-536475
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c0dec3f81fa1d67ed4ee425e0873d0aa009f9b92
	                    minikube.k8s.io/name=functional-536475
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_02T20_01_21_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 02 Dec 2025 20:01:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-536475
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 02 Dec 2025 20:13:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 02 Dec 2025 20:12:26 +0000   Tue, 02 Dec 2025 20:01:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 02 Dec 2025 20:12:26 +0000   Tue, 02 Dec 2025 20:01:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 02 Dec 2025 20:12:26 +0000   Tue, 02 Dec 2025 20:01:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 02 Dec 2025 20:12:26 +0000   Tue, 02 Dec 2025 20:01:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-536475
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 c31a325af81b969158c21fa769271857
	  System UUID:                53a56975-551d-42c6-af56-c41b732c13d5
	  Boot ID:                    9dd5456f-e394-4b4b-9458-48d26faf507e
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-fvwcb                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-node-connect-7d85dfc575-fpjpw           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-5bb876957f-j8cq5                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     10m
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m54s
	  kube-system                 coredns-66bc5c9577-zsmf4                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     11m
	  kube-system                 etcd-functional-536475                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         11m
	  kube-system                 kindnet-rtsj2                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-functional-536475              250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-536475     200m (2%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-gd5j7                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-functional-536475              100m (1%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-spcgp    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m46s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-5z7lx         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m46s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 11m                kube-proxy       
	  Normal  Starting                 11m                kube-proxy       
	  Normal  NodeHasSufficientMemory  11m                kubelet          Node functional-536475 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m                kubelet          Node functional-536475 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m                kubelet          Node functional-536475 status is now: NodeHasSufficientPID
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           11m                node-controller  Node functional-536475 event: Registered Node functional-536475 in Controller
	  Normal  NodeReady                11m                kubelet          Node functional-536475 status is now: NodeReady
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x9 over 10m)  kubelet          Node functional-536475 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-536475 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)  kubelet          Node functional-536475 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           10m                node-controller  Node functional-536475 event: Registered Node functional-536475 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 2e f4 c0 f2 56 fb 08 06
	[  +0.000355] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 06 95 9a 02 fc fb 08 06
	[Dec 2 19:57] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000013] ll header: 00000000: a6 a8 6c 2c e4 fb 66 95 0d 9f b9 e6 08 00
	[  +1.020139] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a6 a8 6c 2c e4 fb 66 95 0d 9f b9 e6 08 00
	[  +1.023918] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: a6 a8 6c 2c e4 fb 66 95 0d 9f b9 e6 08 00
	[  +1.023921] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: a6 a8 6c 2c e4 fb 66 95 0d 9f b9 e6 08 00
	[  +1.023940] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a6 a8 6c 2c e4 fb 66 95 0d 9f b9 e6 08 00
	[  +1.023881] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: a6 a8 6c 2c e4 fb 66 95 0d 9f b9 e6 08 00
	[  +2.047855] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 a8 6c 2c e4 fb 66 95 0d 9f b9 e6 08 00
	[  +4.031797] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 a8 6c 2c e4 fb 66 95 0d 9f b9 e6 08 00
	[  +8.191553] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: a6 a8 6c 2c e4 fb 66 95 0d 9f b9 e6 08 00
	[Dec 2 19:58] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: a6 a8 6c 2c e4 fb 66 95 0d 9f b9 e6 08 00
	[ +32.254238] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a6 a8 6c 2c e4 fb 66 95 0d 9f b9 e6 08 00
	
	
	==> etcd [6a3c655f3f2b289a5ef4244bc70fc704d5ec36008ff8a9df37fdd4d5e6b4b7cf] <==
	{"level":"warn","ts":"2025-12-02T20:01:17.389430Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:01:17.397488Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:01:17.411392Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:01:17.418124Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38286","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:01:17.447652Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:01:17.455108Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38350","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:01:17.498965Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38382","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-02T20:02:03.338353Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-02T20:02:03.338442Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-536475","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-12-02T20:02:03.338583Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-02T20:02:03.338723Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-02T20:02:10.340294Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-02T20:02:10.340368Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-02T20:02:10.340523Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-02T20:02:10.340542Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-02T20:02:10.340410Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-02T20:02:10.340559Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-02T20:02:10.340569Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-02T20:02:10.340445Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-12-02T20:02:10.340605Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-12-02T20:02:10.340617Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-12-02T20:02:10.343139Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-12-02T20:02:10.343218Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-02T20:02:10.343253Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-12-02T20:02:10.343283Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-536475","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [771e66a82c39ad1e4c342b8a78fabfba7f3d392ea0c0e6c162b953d46d962e7e] <==
	{"level":"warn","ts":"2025-12-02T20:02:23.551868Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57626","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:02:23.558726Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:02:23.565851Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57682","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:02:23.572577Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57708","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:02:23.579700Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57722","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:02:23.587489Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:02:23.594679Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57756","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:02:23.601630Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:02:23.609084Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:02:23.617292Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:02:23.624102Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:02:23.632091Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:02:23.638697Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57886","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:02:23.645529Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:02:23.652792Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:02:23.660023Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:02:23.666737Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:02:23.686772Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57962","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:02:23.694181Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57970","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:02:23.701311Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57998","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:02:23.752103Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58004","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-02T20:02:55.920632Z","caller":"traceutil/trace.go:172","msg":"trace[1419510942] transaction","detail":"{read_only:false; response_revision:637; number_of_response:1; }","duration":"128.752621ms","start":"2025-12-02T20:02:55.791855Z","end":"2025-12-02T20:02:55.920608Z","steps":["trace[1419510942] 'process raft request'  (duration: 128.612382ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-02T20:12:23.215920Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1109}
	{"level":"info","ts":"2025-12-02T20:12:23.236684Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1109,"took":"20.404792ms","hash":4122620791,"current-db-size-bytes":3371008,"current-db-size":"3.4 MB","current-db-size-in-use-bytes":1441792,"current-db-size-in-use":"1.4 MB"}
	{"level":"info","ts":"2025-12-02T20:12:23.236736Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":4122620791,"revision":1109,"compact-revision":-1}
	
	
	==> kernel <==
	 20:13:06 up  1:55,  0 user,  load average: 0.27, 0.31, 0.92
	Linux functional-536475 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [452f8a3a5ca14fcf229c3e6f09bbe640374fcacc3f98f88fd73a610be51ed1f9] <==
	I1202 20:01:26.345859       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1202 20:01:26.346212       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1202 20:01:26.346455       1 main.go:148] setting mtu 1500 for CNI 
	I1202 20:01:26.346475       1 main.go:178] kindnetd IP family: "ipv4"
	I1202 20:01:26.346495       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-02T20:01:26Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1202 20:01:26.547018       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1202 20:01:26.547099       1 controller.go:381] "Waiting for informer caches to sync"
	I1202 20:01:26.547120       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1202 20:01:26.547979       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1202 20:01:26.847212       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1202 20:01:26.847250       1 metrics.go:72] Registering metrics
	I1202 20:01:26.847329       1 controller.go:711] "Syncing nftables rules"
	I1202 20:01:36.547886       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 20:01:36.548016       1 main.go:301] handling current node
	I1202 20:01:46.555228       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 20:01:46.555267       1 main.go:301] handling current node
	I1202 20:01:56.551209       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 20:01:56.551256       1 main.go:301] handling current node
	
	
	==> kindnet [c0255d6dd66f630f033be18ff2cd93251180344c8ffbfa6df9fc80e7450d73c1] <==
	I1202 20:11:03.785427       1 main.go:301] handling current node
	I1202 20:11:13.785811       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 20:11:13.785859       1 main.go:301] handling current node
	I1202 20:11:23.784416       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 20:11:23.784465       1 main.go:301] handling current node
	I1202 20:11:33.784142       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 20:11:33.784181       1 main.go:301] handling current node
	I1202 20:11:43.786162       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 20:11:43.786228       1 main.go:301] handling current node
	I1202 20:11:53.792696       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 20:11:53.792735       1 main.go:301] handling current node
	I1202 20:12:03.784541       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 20:12:03.784589       1 main.go:301] handling current node
	I1202 20:12:13.783958       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 20:12:13.784003       1 main.go:301] handling current node
	I1202 20:12:23.784797       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 20:12:23.784843       1 main.go:301] handling current node
	I1202 20:12:33.785060       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 20:12:33.785122       1 main.go:301] handling current node
	I1202 20:12:43.787004       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 20:12:43.787050       1 main.go:301] handling current node
	I1202 20:12:53.788201       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 20:12:53.788238       1 main.go:301] handling current node
	I1202 20:13:03.785279       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 20:13:03.785323       1 main.go:301] handling current node
	
	
	==> kube-apiserver [18b819fdfdbd7c3fdc55e92aa71b03c01502c5402b9f7e557d5ceede652e271b] <==
	I1202 20:02:24.247675       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1202 20:02:24.266157       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1202 20:02:25.131233       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1202 20:02:25.436732       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1202 20:02:25.438170       1 controller.go:667] quota admission added evaluator for: endpoints
	I1202 20:02:25.442962       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1202 20:02:26.095088       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1202 20:02:26.196977       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1202 20:02:26.256141       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1202 20:02:26.262686       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1202 20:02:42.059278       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.107.183.15"}
	I1202 20:02:47.518239       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.106.9.40"}
	I1202 20:02:47.558773       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1202 20:02:48.967037       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.111.190.44"}
	I1202 20:02:50.583946       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.103.125.9"}
	E1202 20:03:01.661528       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:41252: use of closed network connection
	E1202 20:03:02.766243       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:41258: use of closed network connection
	E1202 20:03:04.669984       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:41276: use of closed network connection
	I1202 20:03:04.961226       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.97.179.176"}
	E1202 20:03:11.541245       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:57966: use of closed network connection
	E1202 20:03:19.677915       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:40264: use of closed network connection
	I1202 20:03:20.564758       1 controller.go:667] quota admission added evaluator for: namespaces
	I1202 20:03:20.686746       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.110.18.167"}
	I1202 20:03:20.706169       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.96.220.172"}
	I1202 20:12:24.153935       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [4fbf66f1f1638826328b01fe93b451b0269e40b934037c4fa202df7b4dafa20b] <==
	I1202 20:02:27.589472       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1202 20:02:27.589767       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1202 20:02:27.589784       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1202 20:02:27.589970       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1202 20:02:27.590210       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1202 20:02:27.591731       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1202 20:02:27.593380       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1202 20:02:27.593397       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1202 20:02:27.593412       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1202 20:02:27.595586       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1202 20:02:27.595970       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1202 20:02:27.596023       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1202 20:02:27.599019       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1202 20:02:27.600223       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1202 20:02:27.600238       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1202 20:02:27.601376       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1202 20:02:27.603671       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1202 20:02:27.605630       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1202 20:02:27.611578       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1202 20:03:20.615155       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1202 20:03:20.619553       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1202 20:03:20.621031       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1202 20:03:20.623672       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1202 20:03:20.625560       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1202 20:03:20.630466       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [a873056639cd325b145f181829df58326fdf792485df421385c2ed8eb6904621] <==
	I1202 20:02:04.731579       1 serving.go:386] Generated self-signed cert in-memory
	I1202 20:02:05.186607       1 controllermanager.go:191] "Starting" version="v1.34.2"
	I1202 20:02:05.186630       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 20:02:05.188987       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1202 20:02:05.189053       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1202 20:02:05.189528       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1202 20:02:05.189559       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1202 20:02:05.196891       1 controllermanager.go:781] "Started controller" controller="serviceaccount-token-controller"
	I1202 20:02:05.196961       1 shared_informer.go:349] "Waiting for caches to sync" controller="tokens"
	I1202 20:02:13.504666       1 controllermanager.go:781] "Started controller" controller="daemonset-controller"
	I1202 20:02:13.504826       1 daemon_controller.go:310] "Starting daemon sets controller" logger="daemonset-controller"
	I1202 20:02:13.504844       1 shared_informer.go:349] "Waiting for caches to sync" controller="daemon sets"
	F1202 20:02:13.505051       1 client_builder_dynamic.go:154] Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/serviceaccounts/ephemeral-volume-controller": dial tcp 192.168.49.2:8441: connect: connection refused
	
	
	==> kube-proxy [64e65ab14985dff2b0c169d3a240790355abc76f90429a97c8fe3f298a2838dc] <==
	I1202 20:01:26.133028       1 server_linux.go:53] "Using iptables proxy"
	I1202 20:01:26.194335       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1202 20:01:26.295307       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1202 20:01:26.295348       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1202 20:01:26.295442       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1202 20:01:26.318000       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1202 20:01:26.318097       1 server_linux.go:132] "Using iptables Proxier"
	I1202 20:01:26.324617       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1202 20:01:26.325026       1 server.go:527] "Version info" version="v1.34.2"
	I1202 20:01:26.325096       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 20:01:26.326825       1 config.go:106] "Starting endpoint slice config controller"
	I1202 20:01:26.326862       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1202 20:01:26.327554       1 config.go:200] "Starting service config controller"
	I1202 20:01:26.327579       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1202 20:01:26.327688       1 config.go:403] "Starting serviceCIDR config controller"
	I1202 20:01:26.327710       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1202 20:01:26.328330       1 config.go:309] "Starting node config controller"
	I1202 20:01:26.328357       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1202 20:01:26.328370       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1202 20:01:26.427552       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1202 20:01:26.428744       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1202 20:01:26.428765       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-proxy [e389f07df50c3533ac8991387ded6cc6ae6c83db1eda269088be17e6d240fbd7] <==
	I1202 20:02:03.427093       1 server_linux.go:53] "Using iptables proxy"
	I1202 20:02:03.498913       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1202 20:02:03.600096       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1202 20:02:03.600145       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1202 20:02:03.600278       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1202 20:02:03.619628       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1202 20:02:03.619678       1 server_linux.go:132] "Using iptables Proxier"
	I1202 20:02:03.625334       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1202 20:02:03.625675       1 server.go:527] "Version info" version="v1.34.2"
	I1202 20:02:03.625712       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 20:02:03.627207       1 config.go:106] "Starting endpoint slice config controller"
	I1202 20:02:03.627314       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1202 20:02:03.627255       1 config.go:403] "Starting serviceCIDR config controller"
	I1202 20:02:03.627353       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1202 20:02:03.627218       1 config.go:200] "Starting service config controller"
	I1202 20:02:03.627388       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1202 20:02:03.627251       1 config.go:309] "Starting node config controller"
	I1202 20:02:03.627425       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1202 20:02:03.727473       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1202 20:02:03.727493       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1202 20:02:03.727503       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1202 20:02:03.727493       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [50eb733a1ea0e3e7db05f833cf7ff5a13a7104b2a5649ce90337c0a3854a671d] <==
	E1202 20:01:17.910686       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1202 20:01:17.910717       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1202 20:01:17.910740       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1202 20:01:17.910871       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1202 20:01:17.910875       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1202 20:01:17.910903       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1202 20:01:17.910976       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1202 20:01:18.773776       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1202 20:01:18.792349       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1202 20:01:18.876985       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1202 20:01:18.959732       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1202 20:01:18.963038       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1202 20:01:19.068673       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1202 20:01:19.075529       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1202 20:01:19.099705       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1202 20:01:19.147031       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1202 20:01:19.152258       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1202 20:01:19.193479       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	I1202 20:01:19.506853       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1202 20:02:03.114000       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1202 20:02:03.114101       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1202 20:02:03.114253       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1202 20:02:03.114307       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1202 20:02:03.114376       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1202 20:02:03.114426       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [aae70adc6dd17908b0171fc1051091d80fdf4dd7f6c093526690a4e3294c429c] <==
	I1202 20:02:03.848655       1 serving.go:386] Generated self-signed cert in-memory
	I1202 20:02:12.981837       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1202 20:02:12.981868       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 20:02:12.986514       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1202 20:02:12.986545       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1202 20:02:12.986554       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1202 20:02:12.986571       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1202 20:02:12.986583       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1202 20:02:12.986569       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1202 20:02:12.987526       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1202 20:02:12.987759       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1202 20:02:13.086955       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1202 20:02:13.087027       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1202 20:02:13.087133       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	E1202 20:02:24.150950       1 reflector.go:205] "Failed to watch" err="nodes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1202 20:02:24.151050       1 reflector.go:205] "Failed to watch" err="csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1202 20:02:24.153028       1 reflector.go:205] "Failed to watch" err="csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1202 20:02:24.153081       1 reflector.go:205] "Failed to watch" err="csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1202 20:02:24.153248       1 reflector.go:205] "Failed to watch" err="resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1202 20:02:24.153278       1 reflector.go:205] "Failed to watch" err="resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1202 20:02:24.153297       1 reflector.go:205] "Failed to watch" err="storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1202 20:02:24.153363       1 reflector.go:205] "Failed to watch" err="services is forbidden: User \"system:kube-scheduler\" cannot watch resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1202 20:02:24.153457       1 reflector.go:205] "Failed to watch" err="replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1202 20:02:24.153477       1 reflector.go:205] "Failed to watch" err="pods is forbidden: User \"system:kube-scheduler\" cannot watch resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1202 20:02:24.153587       1 reflector.go:205] "Failed to watch" err="volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	
	
	==> kubelet <==
	Dec 02 20:10:34 functional-536475 kubelet[4328]: E1202 20:10:34.235409    4328 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-fpjpw" podUID="43fa9860-79c4-42f8-8e1a-d4fcb75d7aa7"
	Dec 02 20:10:39 functional-536475 kubelet[4328]: E1202 20:10:39.235332    4328 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-fvwcb" podUID="8a624459-3c37-468a-aa43-2c66dde94dd4"
	Dec 02 20:10:49 functional-536475 kubelet[4328]: E1202 20:10:49.234859    4328 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-fpjpw" podUID="43fa9860-79c4-42f8-8e1a-d4fcb75d7aa7"
	Dec 02 20:10:51 functional-536475 kubelet[4328]: E1202 20:10:51.235150    4328 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-fvwcb" podUID="8a624459-3c37-468a-aa43-2c66dde94dd4"
	Dec 02 20:11:01 functional-536475 kubelet[4328]: E1202 20:11:01.235435    4328 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-fpjpw" podUID="43fa9860-79c4-42f8-8e1a-d4fcb75d7aa7"
	Dec 02 20:11:04 functional-536475 kubelet[4328]: E1202 20:11:04.235272    4328 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-fvwcb" podUID="8a624459-3c37-468a-aa43-2c66dde94dd4"
	Dec 02 20:11:13 functional-536475 kubelet[4328]: E1202 20:11:13.234673    4328 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-fpjpw" podUID="43fa9860-79c4-42f8-8e1a-d4fcb75d7aa7"
	Dec 02 20:11:16 functional-536475 kubelet[4328]: E1202 20:11:16.235475    4328 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-fvwcb" podUID="8a624459-3c37-468a-aa43-2c66dde94dd4"
	Dec 02 20:11:25 functional-536475 kubelet[4328]: E1202 20:11:25.235176    4328 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-fpjpw" podUID="43fa9860-79c4-42f8-8e1a-d4fcb75d7aa7"
	Dec 02 20:11:27 functional-536475 kubelet[4328]: E1202 20:11:27.234747    4328 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-fvwcb" podUID="8a624459-3c37-468a-aa43-2c66dde94dd4"
	Dec 02 20:11:36 functional-536475 kubelet[4328]: E1202 20:11:36.234851    4328 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-fpjpw" podUID="43fa9860-79c4-42f8-8e1a-d4fcb75d7aa7"
	Dec 02 20:11:41 functional-536475 kubelet[4328]: E1202 20:11:41.235155    4328 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-fvwcb" podUID="8a624459-3c37-468a-aa43-2c66dde94dd4"
	Dec 02 20:11:51 functional-536475 kubelet[4328]: E1202 20:11:51.234840    4328 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-fpjpw" podUID="43fa9860-79c4-42f8-8e1a-d4fcb75d7aa7"
	Dec 02 20:11:54 functional-536475 kubelet[4328]: E1202 20:11:54.235257    4328 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-fvwcb" podUID="8a624459-3c37-468a-aa43-2c66dde94dd4"
	Dec 02 20:12:03 functional-536475 kubelet[4328]: E1202 20:12:03.235329    4328 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-fpjpw" podUID="43fa9860-79c4-42f8-8e1a-d4fcb75d7aa7"
	Dec 02 20:12:08 functional-536475 kubelet[4328]: E1202 20:12:08.237585    4328 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-fvwcb" podUID="8a624459-3c37-468a-aa43-2c66dde94dd4"
	Dec 02 20:12:17 functional-536475 kubelet[4328]: E1202 20:12:17.235393    4328 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-fpjpw" podUID="43fa9860-79c4-42f8-8e1a-d4fcb75d7aa7"
	Dec 02 20:12:22 functional-536475 kubelet[4328]: E1202 20:12:22.235088    4328 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-fvwcb" podUID="8a624459-3c37-468a-aa43-2c66dde94dd4"
	Dec 02 20:12:31 functional-536475 kubelet[4328]: E1202 20:12:31.234959    4328 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-fpjpw" podUID="43fa9860-79c4-42f8-8e1a-d4fcb75d7aa7"
	Dec 02 20:12:34 functional-536475 kubelet[4328]: E1202 20:12:34.235869    4328 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-fvwcb" podUID="8a624459-3c37-468a-aa43-2c66dde94dd4"
	Dec 02 20:12:42 functional-536475 kubelet[4328]: E1202 20:12:42.235301    4328 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-fpjpw" podUID="43fa9860-79c4-42f8-8e1a-d4fcb75d7aa7"
	Dec 02 20:12:45 functional-536475 kubelet[4328]: E1202 20:12:45.235511    4328 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-fvwcb" podUID="8a624459-3c37-468a-aa43-2c66dde94dd4"
	Dec 02 20:12:56 functional-536475 kubelet[4328]: E1202 20:12:56.235334    4328 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-fpjpw" podUID="43fa9860-79c4-42f8-8e1a-d4fcb75d7aa7"
	Dec 02 20:13:00 functional-536475 kubelet[4328]: E1202 20:13:00.235513    4328 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-fvwcb" podUID="8a624459-3c37-468a-aa43-2c66dde94dd4"
	Dec 02 20:13:07 functional-536475 kubelet[4328]: E1202 20:13:07.234955    4328 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-fpjpw" podUID="43fa9860-79c4-42f8-8e1a-d4fcb75d7aa7"
	
	
	==> kubernetes-dashboard [7e3168c9d862a0a7619ac8b93028906bfcd380ace1b8f2d6ea1928d28ab2a9a5] <==
	2025/12/02 20:03:26 Using namespace: kubernetes-dashboard
	2025/12/02 20:03:26 Using in-cluster config to connect to apiserver
	2025/12/02 20:03:26 Using secret token for csrf signing
	2025/12/02 20:03:26 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/02 20:03:26 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/02 20:03:26 Successful initial request to the apiserver, version: v1.34.2
	2025/12/02 20:03:26 Generating JWE encryption key
	2025/12/02 20:03:26 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/02 20:03:26 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/02 20:03:26 Initializing JWE encryption key from synchronized object
	2025/12/02 20:03:26 Creating in-cluster Sidecar client
	2025/12/02 20:03:26 Successful request to sidecar
	2025/12/02 20:03:26 Serving insecurely on HTTP port: 9090
	2025/12/02 20:03:26 Starting overwatch
	
	
	==> storage-provisioner [26d8d1fc15e04d18a759cef17cb9c65875f75efa16a978a11a0e83a057f803e4] <==
	W1202 20:12:42.825950       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:12:44.829893       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:12:44.834247       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:12:46.838137       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:12:46.843897       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:12:48.847628       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:12:48.851760       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:12:50.855278       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:12:50.859528       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:12:52.863048       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:12:52.868210       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:12:54.871747       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:12:54.877194       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:12:56.881654       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:12:56.886921       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:12:58.890413       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:12:58.895910       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:13:00.898853       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:13:00.904369       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:13:02.907628       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:13:02.911810       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:13:04.915016       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:13:04.920273       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:13:06.923958       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:13:06.929421       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [c7c54e1c1555a232d5b9a727e806233f22ae612c54d719d989210575e76bb02e] <==
	I1202 20:01:37.095104       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-536475_4cda0a2b-423f-42f4-bbba-e502ba837363!
	W1202 20:01:39.003980       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:01:39.009380       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:01:41.013109       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:01:41.017492       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:01:43.021193       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:01:43.027291       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:01:45.030275       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:01:45.034356       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:01:47.037806       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:01:47.041893       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:01:49.045323       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:01:49.049997       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:01:51.053229       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:01:51.057564       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:01:53.061009       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:01:53.065436       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:01:55.068704       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:01:55.073037       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:01:57.077124       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:01:57.081614       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:01:59.085093       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:01:59.090196       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:02:01.094558       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:02:01.099497       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-536475 -n functional-536475
helpers_test.go:269: (dbg) Run:  kubectl --context functional-536475 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-fvwcb hello-node-connect-7d85dfc575-fpjpw
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-536475 describe pod busybox-mount hello-node-75c85bcc94-fvwcb hello-node-connect-7d85dfc575-fpjpw
helpers_test.go:290: (dbg) kubectl --context functional-536475 describe pod busybox-mount hello-node-75c85bcc94-fvwcb hello-node-connect-7d85dfc575-fpjpw:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-536475/192.168.49.2
	Start Time:       Tue, 02 Dec 2025 20:03:08 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.9
	IPs:
	  IP:  10.244.0.9
	Containers:
	  mount-munger:
	    Container ID:  cri-o://b422aeb0fab91be9bae5f5a3f2900716089122ece27c47de3cea22be946dff07
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Tue, 02 Dec 2025 20:03:11 +0000
	      Finished:     Tue, 02 Dec 2025 20:03:11 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gdb4d (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-gdb4d:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  9m58s  default-scheduler  Successfully assigned default/busybox-mount to functional-536475
	  Normal  Pulling    9m58s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     9m56s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.425s (2.425s including waiting). Image size: 4631262 bytes.
	  Normal  Created    9m56s  kubelet            Created container: mount-munger
	  Normal  Started    9m56s  kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-fvwcb
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-536475/192.168.49.2
	Start Time:       Tue, 02 Dec 2025 20:02:48 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.5
	IPs:
	  IP:           10.244.0.5
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hf87w (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-hf87w:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                 From               Message
	  ----     ------     ----                ----               -------
	  Normal   Scheduled  10m                 default-scheduler  Successfully assigned default/hello-node-75c85bcc94-fvwcb to functional-536475
	  Normal   Pulling    7m8s (x5 over 10m)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m8s (x5 over 10m)  kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m8s (x5 over 10m)  kubelet            Error: ErrImagePull
	  Normal   BackOff    7s (x43 over 10m)   kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     7s (x43 over 10m)   kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-fpjpw
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-536475/192.168.49.2
	Start Time:       Tue, 02 Dec 2025 20:03:04 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-shmnx (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-shmnx:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                 From               Message
	  ----     ------     ----                ----               -------
	  Normal   Scheduled  10m                 default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-fpjpw to functional-536475
	  Normal   Pulling    7m1s (x5 over 10m)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m1s (x5 over 10m)  kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m1s (x5 over 10m)  kubelet            Error: ErrImagePull
	  Normal   BackOff    0s (x43 over 10m)   kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     0s (x43 over 10m)   kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (603.07s)
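The root cause visible throughout this post-mortem is CRI-O's short-name enforcement: the unqualified image "kicbase/echo-server" cannot be resolved to a single registry, so every pull ends in ErrImagePull/ImagePullBackOff. A minimal workaround sketch, assuming the image is published on Docker Hub (the registry prefix below is an assumption, not something this report confirms):

	# Hypothetical fix, not the test's own command: use a fully-qualified reference so no short-name resolution is needed
	kubectl --context functional-536475 set image deployment/hello-node-connect echo-server=docker.io/kicbase/echo-server:latest
	# Watch the pod recover once the image can actually be pulled
	kubectl --context functional-536475 get pods -n default -l app=hello-node-connect -w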

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-536475 image load --daemon kicbase/echo-server:functional-536475 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-536475 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-536475" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.03s)
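`image load --daemon` copies a tag from the host Docker daemon into the node's runtime, so a hedged first check is whether that tag exists on the host at all before blaming the transfer; the commands below are a debugging sketch, not part of the test:

	# Confirm the tag is present in the host daemon (prints the image ID if it is)
	docker image inspect kicbase/echo-server:functional-536475 --format '{{.Id}}'
	# List what the node's runtime actually has; CRI-O reports fully-qualified names such as docker.io/kicbase/echo-server:functional-536475
	out/minikube-linux-amd64 -p functional-536475 image ls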

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (600.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-536475 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-536475 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-fvwcb" [8a624459-3c37-468a-aa43-2c66dde94dd4] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-536475 -n functional-536475
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-12-02 20:12:49.328038418 +0000 UTC m=+1117.877821037
functional_test.go:1460: (dbg) Run:  kubectl --context functional-536475 describe po hello-node-75c85bcc94-fvwcb -n default
functional_test.go:1460: (dbg) kubectl --context functional-536475 describe po hello-node-75c85bcc94-fvwcb -n default:
Name:             hello-node-75c85bcc94-fvwcb
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-536475/192.168.49.2
Start Time:       Tue, 02 Dec 2025 20:02:48 +0000
Labels:           app=hello-node
pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.5
IPs:
IP:           10.244.0.5
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hf87w (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-hf87w:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-75c85bcc94-fvwcb to functional-536475
Normal   Pulling    6m50s (x5 over 10m)     kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     6m50s (x5 over 9m55s)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     6m50s (x5 over 9m55s)   kubelet            Error: ErrImagePull
Warning  Failed     4m51s (x20 over 9m54s)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m39s (x21 over 9m54s)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1460: (dbg) Run:  kubectl --context functional-536475 logs hello-node-75c85bcc94-fvwcb -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-536475 logs hello-node-75c85bcc94-fvwcb -n default: exit status 1 (76.229634ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-fvwcb" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-536475 logs hello-node-75c85bcc94-fvwcb -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.66s)
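Since the pod never leaves ImagePullBackOff, waiting the full 10m0s only burns the timeout; a hedged shortcut for triage is to query the pull-failure events directly (the pod name in the field selector is taken from this report):

	# Sketch, not a command the test runs: show only the image-pull failures for the pending pod
	kubectl --context functional-536475 get events -n default --field-selector involvedObject.name=hello-node-75c85bcc94-fvwcb,reason=Failed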

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-536475 image load --daemon kicbase/echo-server:functional-536475 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-536475 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-536475" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.22s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (4.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-536475
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-536475 image load --daemon kicbase/echo-server:functional-536475 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-536475 image ls
functional_test.go:466: (dbg) Done: out/minikube-linux-amd64 -p functional-536475 image ls: (2.322295184s)
functional_test.go:461: expected "kicbase/echo-server:functional-536475" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (4.06s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-536475 image save kicbase/echo-server:functional-536475 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.41s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-536475 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I1202 20:02:55.488827  446426 out.go:360] Setting OutFile to fd 1 ...
	I1202 20:02:55.489171  446426 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:02:55.489182  446426 out.go:374] Setting ErrFile to fd 2...
	I1202 20:02:55.489187  446426 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:02:55.489374  446426 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-407427/.minikube/bin
	I1202 20:02:55.489992  446426 config.go:182] Loaded profile config "functional-536475": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 20:02:55.490101  446426 config.go:182] Loaded profile config "functional-536475": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 20:02:55.490543  446426 cli_runner.go:164] Run: docker container inspect functional-536475 --format={{.State.Status}}
	I1202 20:02:55.511033  446426 ssh_runner.go:195] Run: systemctl --version
	I1202 20:02:55.511131  446426 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-536475
	I1202 20:02:55.531146  446426 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/functional-536475/id_rsa Username:docker}
	I1202 20:02:55.635045  446426 cache_images.go:291] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar
	W1202 20:02:55.635140  446426 cache_images.go:255] Failed to load cached images for "functional-536475": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar: no such file or directory
	I1202 20:02:55.635173  446426 cache_images.go:267] failed pushing to: functional-536475

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.22s)
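The "no such file or directory" in the stderr above follows directly from the earlier ImageSaveToFile failure: the tarball was never written, so this load has nothing to read. A hedged sketch that makes the dependency explicit by guarding the load on the save's output (same path and commands as the test uses):

	TAR=/home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar
	out/minikube-linux-amd64 -p functional-536475 image save kicbase/echo-server:functional-536475 "$TAR"
	# Only attempt the load if the save actually produced a non-empty tarball
	test -s "$TAR" && out/minikube-linux-amd64 -p functional-536475 image load "$TAR"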

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-536475
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-536475 image save --daemon kicbase/echo-server:functional-536475 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-536475
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-536475: exit status 1 (19.131005ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-536475

                                                
                                                
** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-536475

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.44s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-536475 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-536475 service --namespace=default --https --url hello-node: exit status 115 (571.136863ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:30773
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-amd64 -p functional-536475 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.57s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-536475 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-536475 service hello-node --url --format={{.IP}}: exit status 115 (571.63866ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-536475 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.57s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-536475 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-536475 service hello-node --url: exit status 115 (566.300242ms)

                                                
                                                
-- stdout --
	http://192.168.49.2:30773
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-amd64 -p functional-536475 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:30773
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.57s)
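All three ServiceCmd subtests (HTTPS, Format, URL) exit with SVC_UNREACHABLE for the same reason: the hello-node service has no ready backing pod, even though a NodePort URL can still be printed. A hedged pre-check before asking for the URL (not part of the test flow):

	# The service only becomes reachable once this shows at least one ready address
	kubectl --context functional-536475 get endpoints hello-node -n default
	kubectl --context functional-536475 get pods -n default -l app=hello-node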

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (603.2s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-136749 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-136749 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-9f67c86d4-xg9zf" [4d2d1feb-df33-4691-b00e-ba0c03b26c74] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1645: ***** TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-136749 -n functional-136749
functional_test.go:1645: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-12-02 20:25:08.614606986 +0000 UTC m=+1857.164389621
functional_test.go:1645: (dbg) Run:  kubectl --context functional-136749 describe po hello-node-connect-9f67c86d4-xg9zf -n default
functional_test.go:1645: (dbg) kubectl --context functional-136749 describe po hello-node-connect-9f67c86d4-xg9zf -n default:
Name:             hello-node-connect-9f67c86d4-xg9zf
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-136749/192.168.49.2
Start Time:       Tue, 02 Dec 2025 20:15:08 +0000
Labels:           app=hello-node-connect
pod-template-hash=9f67c86d4
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-9f67c86d4
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xbzqk (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-xbzqk:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-9f67c86d4-xg9zf to functional-136749
Normal   Pulling    6m55s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     6m55s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     6m55s (x5 over 10m)   kubelet            Error: ErrImagePull
Warning  Failed     4m57s (x20 over 10m)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m44s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1645: (dbg) Run:  kubectl --context functional-136749 logs hello-node-connect-9f67c86d4-xg9zf -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-136749 logs hello-node-connect-9f67c86d4-xg9zf -n default: exit status 1 (72.667909ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-9f67c86d4-xg9zf" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1645: kubectl --context functional-136749 logs hello-node-connect-9f67c86d4-xg9zf -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-136749 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-9f67c86d4-xg9zf
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-136749/192.168.49.2
Start Time:       Tue, 02 Dec 2025 20:15:08 +0000
Labels:           app=hello-node-connect
pod-template-hash=9f67c86d4
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-9f67c86d4
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xbzqk (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-xbzqk:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-9f67c86d4-xg9zf to functional-136749
Normal   Pulling    6m55s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     6m55s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     6m55s (x5 over 10m)   kubelet            Error: ErrImagePull
Warning  Failed     4m57s (x20 over 10m)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m44s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"

                                                
                                                
functional_test.go:1618: (dbg) Run:  kubectl --context functional-136749 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-136749 logs -l app=hello-node-connect: exit status 1 (69.688513ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-9f67c86d4-xg9zf" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1620: "kubectl --context functional-136749 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-136749 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.97.60.237
IPs:                      10.97.60.237
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  32652/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
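The empty Endpoints field above is the direct consequence of the pod never becoming Ready: a NodePort service normally forwards only to ready endpoints, so the test's requests against NodePort 32652 had nothing to reach. A quick sketch of confirming that, assuming the same kubectl context:

    kubectl --context functional-136749 get endpoints hello-node-connect -n default
    kubectl --context functional-136749 get pods -n default -l app=hello-node-connect -o wide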
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-136749
helpers_test.go:243: (dbg) docker inspect functional-136749:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "00bab378db2a5ed70b9c3cc8fbf81db12e94050ad43da8a52371ce6eaab4c410",
	        "Created": "2025-12-02T20:13:12.557471525Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 457584,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-02T20:13:12.591703282Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/00bab378db2a5ed70b9c3cc8fbf81db12e94050ad43da8a52371ce6eaab4c410/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/00bab378db2a5ed70b9c3cc8fbf81db12e94050ad43da8a52371ce6eaab4c410/hostname",
	        "HostsPath": "/var/lib/docker/containers/00bab378db2a5ed70b9c3cc8fbf81db12e94050ad43da8a52371ce6eaab4c410/hosts",
	        "LogPath": "/var/lib/docker/containers/00bab378db2a5ed70b9c3cc8fbf81db12e94050ad43da8a52371ce6eaab4c410/00bab378db2a5ed70b9c3cc8fbf81db12e94050ad43da8a52371ce6eaab4c410-json.log",
	        "Name": "/functional-136749",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-136749:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-136749",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "00bab378db2a5ed70b9c3cc8fbf81db12e94050ad43da8a52371ce6eaab4c410",
	                "LowerDir": "/var/lib/docker/overlay2/e4f9c4f7fd8f379cb8cc6f1b102af37ccee0bde8da4f501ecbd54c33d543507a-init/diff:/var/lib/docker/overlay2/49bb44e987885c48600d5ae7ad3c81cad82a00f38070c0460882c5746d9fae59/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e4f9c4f7fd8f379cb8cc6f1b102af37ccee0bde8da4f501ecbd54c33d543507a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e4f9c4f7fd8f379cb8cc6f1b102af37ccee0bde8da4f501ecbd54c33d543507a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e4f9c4f7fd8f379cb8cc6f1b102af37ccee0bde8da4f501ecbd54c33d543507a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-136749",
	                "Source": "/var/lib/docker/volumes/functional-136749/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-136749",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-136749",
	                "name.minikube.sigs.k8s.io": "functional-136749",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "b785c8e030bd9568a260c1b879dac269f77d16096a777d23e29b6bdee4832ead",
	            "SandboxKey": "/var/run/docker/netns/b785c8e030bd",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33163"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33164"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33167"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33165"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33166"
	                    }
	                ]
	            },
	            "Networks": {
	                "functional-136749": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1c57f60df8580663da7459b21ded812540b354e1c1f04baecf709bcfd717237b",
	                    "EndpointID": "9fadd9f190c6136747d2dcfabe4d5274f05f6d79b01bd5f7f008a14947679689",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "f2:43:dd:0a:6a:dc",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-136749",
	                        "00bab378db2a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
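Most of the docker inspect dump is routine; the part relevant to this test is the port map minikube published for the node container (for example 8441/tcp, the API server port, is bound to 127.0.0.1:33166). A short sketch for extracting just that mapping instead of reading the full JSON, assuming jq is available on the host:

    docker inspect functional-136749 --format '{{json .NetworkSettings.Ports}}' | jq .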
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-136749 -n functional-136749
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-136749 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-136749 logs -n 25: (1.439204496s)
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                     ARGS                                                                      │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ mount          │ -p functional-136749 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo267375188/001:/mount1 --alsologtostderr -v=1           │ functional-136749 │ jenkins │ v1.37.0 │ 02 Dec 25 20:15 UTC │                     │
	│ mount          │ -p functional-136749 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo267375188/001:/mount2 --alsologtostderr -v=1           │ functional-136749 │ jenkins │ v1.37.0 │ 02 Dec 25 20:15 UTC │                     │
	│ ssh            │ functional-136749 ssh findmnt -T /mount1                                                                                                      │ functional-136749 │ jenkins │ v1.37.0 │ 02 Dec 25 20:15 UTC │ 02 Dec 25 20:15 UTC │
	│ ssh            │ functional-136749 ssh findmnt -T /mount2                                                                                                      │ functional-136749 │ jenkins │ v1.37.0 │ 02 Dec 25 20:15 UTC │ 02 Dec 25 20:15 UTC │
	│ ssh            │ functional-136749 ssh findmnt -T /mount3                                                                                                      │ functional-136749 │ jenkins │ v1.37.0 │ 02 Dec 25 20:15 UTC │ 02 Dec 25 20:15 UTC │
	│ mount          │ -p functional-136749 --kill=true                                                                                                              │ functional-136749 │ jenkins │ v1.37.0 │ 02 Dec 25 20:15 UTC │                     │
	│ start          │ -p functional-136749 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-136749 │ jenkins │ v1.37.0 │ 02 Dec 25 20:15 UTC │                     │
	│ start          │ -p functional-136749 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0           │ functional-136749 │ jenkins │ v1.37.0 │ 02 Dec 25 20:15 UTC │                     │
	│ start          │ -p functional-136749 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-136749 │ jenkins │ v1.37.0 │ 02 Dec 25 20:15 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-136749 --alsologtostderr -v=1                                                                                │ functional-136749 │ jenkins │ v1.37.0 │ 02 Dec 25 20:15 UTC │ 02 Dec 25 20:15 UTC │
	│ update-context │ functional-136749 update-context --alsologtostderr -v=2                                                                                       │ functional-136749 │ jenkins │ v1.37.0 │ 02 Dec 25 20:15 UTC │ 02 Dec 25 20:15 UTC │
	│ update-context │ functional-136749 update-context --alsologtostderr -v=2                                                                                       │ functional-136749 │ jenkins │ v1.37.0 │ 02 Dec 25 20:15 UTC │ 02 Dec 25 20:15 UTC │
	│ update-context │ functional-136749 update-context --alsologtostderr -v=2                                                                                       │ functional-136749 │ jenkins │ v1.37.0 │ 02 Dec 25 20:15 UTC │ 02 Dec 25 20:15 UTC │
	│ image          │ functional-136749 image ls --format short --alsologtostderr                                                                                   │ functional-136749 │ jenkins │ v1.37.0 │ 02 Dec 25 20:15 UTC │ 02 Dec 25 20:15 UTC │
	│ image          │ functional-136749 image ls --format yaml --alsologtostderr                                                                                    │ functional-136749 │ jenkins │ v1.37.0 │ 02 Dec 25 20:15 UTC │ 02 Dec 25 20:15 UTC │
	│ ssh            │ functional-136749 ssh pgrep buildkitd                                                                                                         │ functional-136749 │ jenkins │ v1.37.0 │ 02 Dec 25 20:15 UTC │                     │
	│ image          │ functional-136749 image build -t localhost/my-image:functional-136749 testdata/build --alsologtostderr                                        │ functional-136749 │ jenkins │ v1.37.0 │ 02 Dec 25 20:15 UTC │ 02 Dec 25 20:15 UTC │
	│ image          │ functional-136749 image ls --format json --alsologtostderr                                                                                    │ functional-136749 │ jenkins │ v1.37.0 │ 02 Dec 25 20:15 UTC │ 02 Dec 25 20:15 UTC │
	│ image          │ functional-136749 image ls --format table --alsologtostderr                                                                                   │ functional-136749 │ jenkins │ v1.37.0 │ 02 Dec 25 20:15 UTC │ 02 Dec 25 20:15 UTC │
	│ image          │ functional-136749 image ls                                                                                                                    │ functional-136749 │ jenkins │ v1.37.0 │ 02 Dec 25 20:15 UTC │ 02 Dec 25 20:15 UTC │
	│ service        │ functional-136749 service list                                                                                                                │ functional-136749 │ jenkins │ v1.37.0 │ 02 Dec 25 20:25 UTC │ 02 Dec 25 20:25 UTC │
	│ service        │ functional-136749 service list -o json                                                                                                        │ functional-136749 │ jenkins │ v1.37.0 │ 02 Dec 25 20:25 UTC │ 02 Dec 25 20:25 UTC │
	│ service        │ functional-136749 service --namespace=default --https --url hello-node                                                                        │ functional-136749 │ jenkins │ v1.37.0 │ 02 Dec 25 20:25 UTC │                     │
	│ service        │ functional-136749 service hello-node --url --format={{.IP}}                                                                                   │ functional-136749 │ jenkins │ v1.37.0 │ 02 Dec 25 20:25 UTC │                     │
	│ service        │ functional-136749 service hello-node --url                                                                                                    │ functional-136749 │ jenkins │ v1.37.0 │ 02 Dec 25 20:25 UTC │                     │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 20:15:29
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 20:15:29.380198  473450 out.go:360] Setting OutFile to fd 1 ...
	I1202 20:15:29.380519  473450 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:15:29.380530  473450 out.go:374] Setting ErrFile to fd 2...
	I1202 20:15:29.380535  473450 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:15:29.380899  473450 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-407427/.minikube/bin
	I1202 20:15:29.381414  473450 out.go:368] Setting JSON to false
	I1202 20:15:29.382444  473450 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":7073,"bootTime":1764699456,"procs":239,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 20:15:29.382514  473450 start.go:143] virtualization: kvm guest
	I1202 20:15:29.384888  473450 out.go:179] * [functional-136749] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1202 20:15:29.386905  473450 out.go:179]   - MINIKUBE_LOCATION=21997
	I1202 20:15:29.386917  473450 notify.go:221] Checking for updates...
	I1202 20:15:29.389461  473450 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 20:15:29.391013  473450 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-407427/kubeconfig
	I1202 20:15:29.392707  473450 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-407427/.minikube
	I1202 20:15:29.394027  473450 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1202 20:15:29.395218  473450 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 20:15:29.396773  473450 config.go:182] Loaded profile config "functional-136749": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 20:15:29.397381  473450 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 20:15:29.424926  473450 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1202 20:15:29.425119  473450 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 20:15:29.480277  473450 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-02 20:15:29.470524655 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 20:15:29.480396  473450 docker.go:319] overlay module found
	I1202 20:15:29.482324  473450 out.go:179] * Using the docker driver based on the existing profile
	I1202 20:15:29.483865  473450 start.go:309] selected driver: docker
	I1202 20:15:29.483888  473450 start.go:927] validating driver "docker" against &{Name:functional-136749 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-136749 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountO
ptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 20:15:29.483992  473450 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 20:15:29.486054  473450 out.go:203] 
	W1202 20:15:29.487629  473450 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1202 20:15:29.489028  473450 out.go:203] 
	
	
	==> CRI-O <==
	Dec 02 20:15:40 functional-136749 crio[4577]: time="2025-12-02T20:15:40.504012056Z" level=info msg="Removed pod sandbox: c84f00844b9826d0e73be7330466fbf3a7f615c896b59ced6f5093d7244ca32f" id=5186a00a-d6a9-4125-b64b-0f85a03e2122 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 02 20:15:40 functional-136749 crio[4577]: time="2025-12-02T20:15:40.50445984Z" level=info msg="Stopping pod sandbox: 8f044287e4bde379a3bff9d05d826b7af9e2923efb9cbeb147d21c9a03a50d97" id=4f9665ce-65d2-46ef-a28f-b3c8053827a2 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 02 20:15:40 functional-136749 crio[4577]: time="2025-12-02T20:15:40.504515908Z" level=info msg="Stopped pod sandbox (already stopped): 8f044287e4bde379a3bff9d05d826b7af9e2923efb9cbeb147d21c9a03a50d97" id=4f9665ce-65d2-46ef-a28f-b3c8053827a2 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 02 20:15:40 functional-136749 crio[4577]: time="2025-12-02T20:15:40.504878087Z" level=info msg="Removing pod sandbox: 8f044287e4bde379a3bff9d05d826b7af9e2923efb9cbeb147d21c9a03a50d97" id=5d6da1b8-9500-4cd1-af8f-26abc73ea7c4 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 02 20:15:40 functional-136749 crio[4577]: time="2025-12-02T20:15:40.507454806Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 02 20:15:40 functional-136749 crio[4577]: time="2025-12-02T20:15:40.50752064Z" level=info msg="Removed pod sandbox: 8f044287e4bde379a3bff9d05d826b7af9e2923efb9cbeb147d21c9a03a50d97" id=5d6da1b8-9500-4cd1-af8f-26abc73ea7c4 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 02 20:15:41 functional-136749 crio[4577]: time="2025-12-02T20:15:41.626938665Z" level=info msg="Pulled image: docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a" id=a616f70d-f5ab-46ce-99b0-d7fcfc787210 name=/runtime.v1.ImageService/PullImage
	Dec 02 20:15:41 functional-136749 crio[4577]: time="2025-12-02T20:15:41.62780154Z" level=info msg="Checking image status: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=ecc24189-ca90-4578-9c21-fefe6b4acb41 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 20:15:41 functional-136749 crio[4577]: time="2025-12-02T20:15:41.630366866Z" level=info msg="Checking image status: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=e4c993f1-fa9f-481e-b87b-bad8b2118802 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 20:15:41 functional-136749 crio[4577]: time="2025-12-02T20:15:41.634720587Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5565989548-8t8m7/dashboard-metrics-scraper" id=f9407956-aec6-4f40-ae42-43be7bc2f415 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 20:15:41 functional-136749 crio[4577]: time="2025-12-02T20:15:41.634879143Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 20:15:41 functional-136749 crio[4577]: time="2025-12-02T20:15:41.640395386Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 20:15:41 functional-136749 crio[4577]: time="2025-12-02T20:15:41.640645194Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/536e76569398fd5a16ca3c04c5d209ae095aa04311875b8a6b7815a5de0b5571/merged/etc/group: no such file or directory"
	Dec 02 20:15:41 functional-136749 crio[4577]: time="2025-12-02T20:15:41.641128249Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 20:15:41 functional-136749 crio[4577]: time="2025-12-02T20:15:41.666591017Z" level=info msg="Created container baa8aec40e719801757a5e4a941e5b4040bd3d0401dcb35a3c8cf2d9a113ae9b: kubernetes-dashboard/dashboard-metrics-scraper-5565989548-8t8m7/dashboard-metrics-scraper" id=f9407956-aec6-4f40-ae42-43be7bc2f415 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 20:15:41 functional-136749 crio[4577]: time="2025-12-02T20:15:41.667374048Z" level=info msg="Starting container: baa8aec40e719801757a5e4a941e5b4040bd3d0401dcb35a3c8cf2d9a113ae9b" id=90617385-06ba-40d5-81ff-938dc4e7f14c name=/runtime.v1.RuntimeService/StartContainer
	Dec 02 20:15:41 functional-136749 crio[4577]: time="2025-12-02T20:15:41.669682713Z" level=info msg="Started container" PID=8494 containerID=baa8aec40e719801757a5e4a941e5b4040bd3d0401dcb35a3c8cf2d9a113ae9b description=kubernetes-dashboard/dashboard-metrics-scraper-5565989548-8t8m7/dashboard-metrics-scraper id=90617385-06ba-40d5-81ff-938dc4e7f14c name=/runtime.v1.RuntimeService/StartContainer sandboxID=c238761bda745e9b231ae99e038c283b2e314a49fd784141d3db22a12bf7c1e5
	Dec 02 20:15:43 functional-136749 crio[4577]: time="2025-12-02T20:15:43.49412185Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=40af2662-80cb-4787-b9f0-f856066ba225 name=/runtime.v1.ImageService/PullImage
	Dec 02 20:15:52 functional-136749 crio[4577]: time="2025-12-02T20:15:52.494722189Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=e2d82ac8-504c-48db-9915-bbd81bab1ac3 name=/runtime.v1.ImageService/PullImage
	Dec 02 20:16:25 functional-136749 crio[4577]: time="2025-12-02T20:16:25.495113299Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=e8ab0e16-d2da-450b-9914-5dccf4c656c7 name=/runtime.v1.ImageService/PullImage
	Dec 02 20:16:45 functional-136749 crio[4577]: time="2025-12-02T20:16:45.494229457Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=338d2ace-52ef-47a4-842c-0a93d033f345 name=/runtime.v1.ImageService/PullImage
	Dec 02 20:17:54 functional-136749 crio[4577]: time="2025-12-02T20:17:54.494627633Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=32dadf5e-4061-4a66-b889-f1ae6f01391b name=/runtime.v1.ImageService/PullImage
	Dec 02 20:18:13 functional-136749 crio[4577]: time="2025-12-02T20:18:13.494710366Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=9fdd19af-ea81-4ed0-a698-5af6bc12cf3c name=/runtime.v1.ImageService/PullImage
	Dec 02 20:20:48 functional-136749 crio[4577]: time="2025-12-02T20:20:48.494731081Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=894efe86-5370-498f-b033-d87e03ba0e59 name=/runtime.v1.ImageService/PullImage
	Dec 02 20:21:03 functional-136749 crio[4577]: time="2025-12-02T20:21:03.494900407Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=092ff1aa-119e-4e7f-9ebf-76dd2562804a name=/runtime.v1.ImageService/PullImage
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	baa8aec40e719       docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a   9 minutes ago       Running             dashboard-metrics-scraper   0                   c238761bda745       dashboard-metrics-scraper-5565989548-8t8m7   kubernetes-dashboard
	d2297b9fcd9a8       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029         9 minutes ago       Running             kubernetes-dashboard        0                   35c5c6e6f3955       kubernetes-dashboard-b84665fb8-c4twm         kubernetes-dashboard
	bef61d253bfe9       docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da                  9 minutes ago       Running             mysql                       0                   d2b4f451ee164       mysql-844cf969f6-gdjq7                       default
	682616cf5ad2a       docker.io/library/nginx@sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541                  9 minutes ago       Running             myfrontend                  0                   33010503874f3       sp-pod                                       default
	3ac6049adb119       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998              9 minutes ago       Exited              mount-munger                0                   a33769daba8c8       busybox-mount                                default
	27f7cd3da1af7       docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7                  10 minutes ago      Running             nginx                       0                   8f6f3cb596bd1       nginx-svc                                    default
	4f6d15d8d95cd       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b                                                 10 minutes ago      Running             kube-apiserver              0                   61a8e996c9dc9       kube-apiserver-functional-136749             kube-system
	018638eafb638       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc                                                 10 minutes ago      Running             kube-controller-manager     2                   d6c098c653051       kube-controller-manager-functional-136749    kube-system
	93db784cbe964       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc                                                 10 minutes ago      Exited              kube-controller-manager     1                   d6c098c653051       kube-controller-manager-functional-136749    kube-system
	5f153a2a71904       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46                                                 10 minutes ago      Running             kube-scheduler              1                   00f418928d671       kube-scheduler-functional-136749             kube-system
	11be5d7b42462       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                                 10 minutes ago      Running             etcd                        1                   cc32659c50a95       etcd-functional-136749                       kube-system
	36a9648a5f192       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810                                                 10 minutes ago      Running             kube-proxy                  1                   45414e8fcaac3       kube-proxy-9f2fz                             kube-system
	938430627e274       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 10 minutes ago      Running             storage-provisioner         1                   a3a577781050a       storage-provisioner                          kube-system
	7e32eeeebaf88       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                                 10 minutes ago      Running             coredns                     1                   9e326f53a2aff       coredns-7d764666f9-9w5r9                     kube-system
	cd27419582f91       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                 10 minutes ago      Running             kindnet-cni                 1                   e21834e8c4199       kindnet-vfs2x                                kube-system
	4cfcbc43e76a4       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                                 11 minutes ago      Exited              coredns                     0                   9e326f53a2aff       coredns-7d764666f9-9w5r9                     kube-system
	2d303907410c3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 11 minutes ago      Exited              storage-provisioner         0                   a3a577781050a       storage-provisioner                          kube-system
	aec3da9f54bf9       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11               11 minutes ago      Exited              kindnet-cni                 0                   e21834e8c4199       kindnet-vfs2x                                kube-system
	4faf0ea243e7d       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810                                                 11 minutes ago      Exited              kube-proxy                  0                   45414e8fcaac3       kube-proxy-9f2fz                             kube-system
	db8ea5922d73e       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                                 11 minutes ago      Exited              etcd                        0                   cc32659c50a95       etcd-functional-136749                       kube-system
	64c2a1ddd1be6       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46                                                 11 minutes ago      Exited              kube-scheduler              0                   00f418928d671       kube-scheduler-functional-136749             kube-system
	
	
	==> coredns [4cfcbc43e76a40ffa8b4240b3ec7761e8f1da39351beab0e54d6814f1a4ead9d] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:45301 - 14617 "HINFO IN 1142020625694387166.4951436862207650895. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.021562057s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [7e32eeeebaf8834899157412847cb73fca6803b70d52a699760f557395bfb136] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:44803 - 457 "HINFO IN 4260484770550811361.8959646962939743781. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.02820628s
	
	
	==> describe nodes <==
	Name:               functional-136749
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-136749
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c0dec3f81fa1d67ed4ee425e0873d0aa009f9b92
	                    minikube.k8s.io/name=functional-136749
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_02T20_13_37_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 02 Dec 2025 20:13:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-136749
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 02 Dec 2025 20:25:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 02 Dec 2025 20:24:03 +0000   Tue, 02 Dec 2025 20:13:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 02 Dec 2025 20:24:03 +0000   Tue, 02 Dec 2025 20:13:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 02 Dec 2025 20:24:03 +0000   Tue, 02 Dec 2025 20:13:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 02 Dec 2025 20:24:03 +0000   Tue, 02 Dec 2025 20:13:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-136749
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 c31a325af81b969158c21fa769271857
	  System UUID:                a092f542-a6eb-46f0-934b-000a85cf465b
	  Boot ID:                    9dd5456f-e394-4b4b-9458-48d26faf507e
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-5758569b79-qj9rw                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-node-connect-9f67c86d4-xg9zf            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-844cf969f6-gdjq7                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     9m41s
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m50s
	  kube-system                 coredns-7d764666f9-9w5r9                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     11m
	  kube-system                 etcd-functional-136749                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         11m
	  kube-system                 kindnet-vfs2x                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-functional-136749              250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-136749     200m (2%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-9f2fz                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-functional-136749              100m (1%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kubernetes-dashboard        dashboard-metrics-scraper-5565989548-8t8m7    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m40s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-c4twm          0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m40s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  11m   node-controller  Node functional-136749 event: Registered Node functional-136749 in Controller
	  Normal  RegisteredNode  10m   node-controller  Node functional-136749 event: Registered Node functional-136749 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 2e f4 c0 f2 56 fb 08 06
	[  +0.000355] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 06 95 9a 02 fc fb 08 06
	[Dec 2 19:57] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000013] ll header: 00000000: a6 a8 6c 2c e4 fb 66 95 0d 9f b9 e6 08 00
	[  +1.020139] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a6 a8 6c 2c e4 fb 66 95 0d 9f b9 e6 08 00
	[  +1.023918] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: a6 a8 6c 2c e4 fb 66 95 0d 9f b9 e6 08 00
	[  +1.023921] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: a6 a8 6c 2c e4 fb 66 95 0d 9f b9 e6 08 00
	[  +1.023940] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a6 a8 6c 2c e4 fb 66 95 0d 9f b9 e6 08 00
	[  +1.023881] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: a6 a8 6c 2c e4 fb 66 95 0d 9f b9 e6 08 00
	[  +2.047855] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 a8 6c 2c e4 fb 66 95 0d 9f b9 e6 08 00
	[  +4.031797] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 a8 6c 2c e4 fb 66 95 0d 9f b9 e6 08 00
	[  +8.191553] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: a6 a8 6c 2c e4 fb 66 95 0d 9f b9 e6 08 00
	[Dec 2 19:58] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: a6 a8 6c 2c e4 fb 66 95 0d 9f b9 e6 08 00
	[ +32.254238] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a6 a8 6c 2c e4 fb 66 95 0d 9f b9 e6 08 00
	
	
	==> etcd [11be5d7b42462527543f17bde7442ceccccf6921f271cf966cc0253a61cb601e] <==
	{"level":"warn","ts":"2025-12-02T20:14:41.635628Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:14:41.642347Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53758","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:14:41.648782Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:14:41.655600Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:14:41.662381Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53830","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:14:41.668903Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53856","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:14:41.675287Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53862","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:14:41.682138Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:14:41.688748Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:14:41.695484Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53906","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:14:41.702168Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53922","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:14:41.717354Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53954","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:14:41.723845Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:14:41.730782Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53982","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:14:41.737716Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53998","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:14:41.745434Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54020","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:14:41.753160Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54038","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:14:41.760326Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54056","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:14:41.774411Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54072","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:14:41.780726Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54090","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:14:41.787325Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:14:41.842782Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54142","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-02T20:24:41.347147Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1136}
	{"level":"info","ts":"2025-12-02T20:24:41.368235Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1136,"took":"20.719192ms","hash":902432677,"current-db-size-bytes":3510272,"current-db-size":"3.5 MB","current-db-size-in-use-bytes":1576960,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2025-12-02T20:24:41.368297Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":902432677,"revision":1136,"compact-revision":-1}
	
	
	==> etcd [db8ea5922d73e720c463396ddedfd7a25abd9f5718c938d6f11f0358704926e7] <==
	{"level":"warn","ts":"2025-12-02T20:13:33.135017Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60590","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:13:33.141865Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:13:33.156575Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:13:33.207608Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:13:34.759504Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"129.480768ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/edit\" limit:1 ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-12-02T20:13:34.759596Z","caller":"traceutil/trace.go:172","msg":"trace[521950638] range","detail":"{range_begin:/registry/clusterroles/edit; range_end:; response_count:0; response_revision:68; }","duration":"129.614187ms","start":"2025-12-02T20:13:34.629966Z","end":"2025-12-02T20:13:34.759581Z","steps":["trace[521950638] 'agreement among raft nodes before linearized reading'  (duration: 60.53878ms)","trace[521950638] 'range keys from in-memory index tree'  (duration: 68.911157ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-02T20:13:34.759719Z","caller":"traceutil/trace.go:172","msg":"trace[632551012] transaction","detail":"{read_only:false; response_revision:69; number_of_response:1; }","duration":"130.971924ms","start":"2025-12-02T20:13:34.628726Z","end":"2025-12-02T20:13:34.759698Z","steps":["trace[632551012] 'process raft request'  (duration: 61.767895ms)","trace[632551012] 'compare'  (duration: 68.981377ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-02T20:14:21.720683Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-02T20:14:21.720772Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-136749","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-12-02T20:14:21.720993Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-02T20:14:28.721974Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"warn","ts":"2025-12-02T20:14:28.722194Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-02T20:14:28.722190Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-02T20:14:28.722300Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2025-12-02T20:14:28.722287Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"warn","ts":"2025-12-02T20:14:28.722317Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-02T20:14:28.722319Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"error","ts":"2025-12-02T20:14:28.722331Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-02T20:14:28.722347Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-12-02T20:14:28.722355Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"error","ts":"2025-12-02T20:14:28.722092Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-02T20:14:28.725987Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-12-02T20:14:28.726056Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-02T20:14:28.726106Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-12-02T20:14:28.726113Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-136749","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 20:25:10 up  2:07,  0 user,  load average: 0.80, 0.49, 0.74
	Linux functional-136749 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [aec3da9f54bf91d04153ecf975dbca9ee69dd1a22bbd7ea2442ac40b70350251] <==
	I1202 20:13:44.316711       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1202 20:13:44.317041       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1202 20:13:44.317230       1 main.go:148] setting mtu 1500 for CNI 
	I1202 20:13:44.317251       1 main.go:178] kindnetd IP family: "ipv4"
	I1202 20:13:44.317273       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-02T20:13:44Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1202 20:13:44.521429       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1202 20:13:44.521489       1 controller.go:381] "Waiting for informer caches to sync"
	I1202 20:13:44.521507       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1202 20:13:44.522061       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1202 20:13:44.922638       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1202 20:13:44.922668       1 metrics.go:72] Registering metrics
	I1202 20:13:44.922757       1 controller.go:711] "Syncing nftables rules"
	I1202 20:13:54.521434       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 20:13:54.521491       1 main.go:301] handling current node
	I1202 20:14:04.525349       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 20:14:04.525408       1 main.go:301] handling current node
	I1202 20:14:14.521329       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 20:14:14.521378       1 main.go:301] handling current node
	
	
	==> kindnet [cd27419582f91b9a4a999c4c2ec789a1eea6b967881fa9d34fec443da02d2876] <==
	I1202 20:23:02.064128       1 main.go:301] handling current node
	I1202 20:23:12.062490       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 20:23:12.062552       1 main.go:301] handling current node
	I1202 20:23:22.071056       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 20:23:22.071116       1 main.go:301] handling current node
	I1202 20:23:32.065290       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 20:23:32.065325       1 main.go:301] handling current node
	I1202 20:23:42.062222       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 20:23:42.062263       1 main.go:301] handling current node
	I1202 20:23:52.069254       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 20:23:52.069294       1 main.go:301] handling current node
	I1202 20:24:02.062192       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 20:24:02.062237       1 main.go:301] handling current node
	I1202 20:24:12.062210       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 20:24:12.062246       1 main.go:301] handling current node
	I1202 20:24:22.071558       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 20:24:22.071595       1 main.go:301] handling current node
	I1202 20:24:32.071038       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 20:24:32.071111       1 main.go:301] handling current node
	I1202 20:24:42.062242       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 20:24:42.062276       1 main.go:301] handling current node
	I1202 20:24:52.071289       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 20:24:52.071325       1 main.go:301] handling current node
	I1202 20:25:02.065807       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 20:25:02.065845       1 main.go:301] handling current node
	
	
	==> kube-apiserver [4f6d15d8d95cd0edf7faafed9bc94a2e50fbb6b25cbe39c28247269ab34af2ba] <==
	I1202 20:14:42.331634       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1202 20:14:42.337749       1 controller.go:667] quota admission added evaluator for: endpoints
	I1202 20:14:42.532487       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1202 20:14:42.532584       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1202 20:14:43.190602       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	W1202 20:14:43.397109       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1202 20:14:43.403557       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1202 20:14:43.859748       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1202 20:14:43.953570       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1202 20:14:44.012206       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1202 20:14:44.018835       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1202 20:14:50.374468       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1202 20:14:57.818013       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.111.204.100"}
	I1202 20:15:02.833661       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.108.123.145"}
	I1202 20:15:04.945532       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.100.63.52"}
	I1202 20:15:08.263245       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.97.60.237"}
	E1202 20:15:19.898159       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:41122: use of closed network connection
	E1202 20:15:28.878771       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:41158: use of closed network connection
	I1202 20:15:29.014477       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.103.120.80"}
	I1202 20:15:30.378216       1 controller.go:667] quota admission added evaluator for: namespaces
	I1202 20:15:30.498397       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.99.206"}
	I1202 20:15:30.509673       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.97.120.35"}
	E1202 20:15:43.149963       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:53186: use of closed network connection
	E1202 20:15:44.565401       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:53206: use of closed network connection
	I1202 20:24:42.229650       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [018638eafb638a586703b511b6aac6570aa834f9388b995fd73cd05c11f48dfb] <==
	I1202 20:14:45.422254       1 shared_informer.go:377] "Caches are synced"
	I1202 20:14:45.422307       1 shared_informer.go:377] "Caches are synced"
	I1202 20:14:45.422342       1 shared_informer.go:377] "Caches are synced"
	I1202 20:14:45.422351       1 shared_informer.go:377] "Caches are synced"
	I1202 20:14:45.422367       1 shared_informer.go:377] "Caches are synced"
	I1202 20:14:45.422396       1 shared_informer.go:377] "Caches are synced"
	I1202 20:14:45.422457       1 shared_informer.go:377] "Caches are synced"
	I1202 20:14:45.422468       1 shared_informer.go:377] "Caches are synced"
	I1202 20:14:45.423220       1 shared_informer.go:377] "Caches are synced"
	I1202 20:14:45.423364       1 shared_informer.go:377] "Caches are synced"
	I1202 20:14:45.423385       1 shared_informer.go:377] "Caches are synced"
	I1202 20:14:45.423389       1 shared_informer.go:377] "Caches are synced"
	I1202 20:14:45.423766       1 shared_informer.go:377] "Caches are synced"
	I1202 20:14:45.431758       1 shared_informer.go:377] "Caches are synced"
	I1202 20:14:45.433187       1 shared_informer.go:370] "Waiting for caches to sync"
	I1202 20:14:45.522335       1 shared_informer.go:377] "Caches are synced"
	I1202 20:14:45.522359       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1202 20:14:45.522364       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1202 20:14:45.533781       1 shared_informer.go:377] "Caches are synced"
	E1202 20:15:30.429889       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1202 20:15:30.433728       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1202 20:15:30.437244       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1202 20:15:30.439225       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1202 20:15:30.443967       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1202 20:15:30.447779       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [93db784cbe964f987d41fd3d515a95eca6b37e5ddba7670c3f2385730e730a24] <==
	I1202 20:14:29.860947       1 serving.go:386] Generated self-signed cert in-memory
	I1202 20:14:29.867723       1 controllermanager.go:189] "Starting" version="v1.35.0-beta.0"
	I1202 20:14:29.867745       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 20:14:29.869529       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1202 20:14:29.869528       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1202 20:14:29.869618       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1202 20:14:29.869680       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1202 20:14:39.878196       1 controllermanager.go:250] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.49.2:8441/healthz\": dial tcp 192.168.49.2:8441: connect: connection refused"
	
	
	==> kube-proxy [36a9648a5f1924b587b0d8f441e1b20b9c8d28fe2e9297c99e045044e7cd470b] <==
	I1202 20:14:22.717038       1 server_linux.go:53] "Using iptables proxy"
	I1202 20:14:22.784227       1 shared_informer.go:370] "Waiting for caches to sync"
	I1202 20:14:31.384625       1 shared_informer.go:377] "Caches are synced"
	I1202 20:14:31.384685       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1202 20:14:31.384815       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1202 20:14:31.405279       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1202 20:14:31.405343       1 server_linux.go:136] "Using iptables Proxier"
	I1202 20:14:31.411220       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1202 20:14:31.411641       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1202 20:14:31.411682       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 20:14:31.412968       1 config.go:403] "Starting serviceCIDR config controller"
	I1202 20:14:31.412991       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1202 20:14:31.413040       1 config.go:106] "Starting endpoint slice config controller"
	I1202 20:14:31.413047       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1202 20:14:31.413048       1 config.go:200] "Starting service config controller"
	I1202 20:14:31.413078       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1202 20:14:31.413176       1 config.go:309] "Starting node config controller"
	I1202 20:14:31.413204       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1202 20:14:31.413212       1 shared_informer.go:356] "Caches are synced" controller="node config"
	E1202 20:14:31.413715       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8441/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1202 20:14:42.233747       1 event_broadcaster.go:270] "Server rejected event (will not retry!)" err="events.events.k8s.io is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot create resource \"events\" in API group \"events.k8s.io\" in the namespace \"default\"" event="&Event{ObjectMeta:{functional-136749.187d7f2f99692232  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},EventTime:2025-12-02 20:14:31.413095601 +0000 UTC m=+8.727422826,Series:nil,ReportingController:kube-proxy,ReportingInstance:kube-proxy-functional-136749,Action:StartKubeProxy,Reason:Starting,Regarding:{Node  functional-136749  v1  },Related:nil,Note:,Type:Normal,DeprecatedSource:{ },DeprecatedFirstTimestamp:0001-01-01 00:00:00 +0000 UTC,DeprecatedLastTimestamp:0001-01-01 00:00:00 +0000 UTC,DeprecatedCount:0,}"
	I1202 20:14:45.413365       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1202 20:14:46.713424       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1202 20:14:47.913313       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-proxy [4faf0ea243e7d35f1d93add2206a826779327d8e2173b0f42d97feaa546e4dac] <==
	I1202 20:13:41.899718       1 server_linux.go:53] "Using iptables proxy"
	I1202 20:13:41.971674       1 shared_informer.go:370] "Waiting for caches to sync"
	I1202 20:13:42.072048       1 shared_informer.go:377] "Caches are synced"
	I1202 20:13:42.072123       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1202 20:13:42.072248       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1202 20:13:42.094759       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1202 20:13:42.094828       1 server_linux.go:136] "Using iptables Proxier"
	I1202 20:13:42.101440       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1202 20:13:42.101954       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1202 20:13:42.101978       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 20:13:42.104062       1 config.go:200] "Starting service config controller"
	I1202 20:13:42.104277       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1202 20:13:42.104136       1 config.go:106] "Starting endpoint slice config controller"
	I1202 20:13:42.104489       1 config.go:309] "Starting node config controller"
	I1202 20:13:42.104508       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1202 20:13:42.104516       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1202 20:13:42.104551       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1202 20:13:42.104769       1 config.go:403] "Starting serviceCIDR config controller"
	I1202 20:13:42.104810       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1202 20:13:42.205143       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1202 20:13:42.205181       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1202 20:13:42.205196       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [5f153a2a71904d713143ed321c9eccc282ac2d3f9bc236f8f76a85c09577447c] <==
	I1202 20:14:30.103859       1 serving.go:386] Generated self-signed cert in-memory
	I1202 20:14:30.577666       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-beta.0"
	I1202 20:14:30.577693       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 20:14:30.581694       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1202 20:14:30.581707       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1202 20:14:30.581729       1 shared_informer.go:370] "Waiting for caches to sync"
	I1202 20:14:30.581732       1 shared_informer.go:370] "Waiting for caches to sync"
	I1202 20:14:30.581737       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1202 20:14:30.581761       1 shared_informer.go:370] "Waiting for caches to sync"
	I1202 20:14:30.581843       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1202 20:14:30.581879       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1202 20:14:30.782746       1 shared_informer.go:377] "Caches are synced"
	I1202 20:14:30.782769       1 shared_informer.go:377] "Caches are synced"
	I1202 20:14:30.782900       1 shared_informer.go:377] "Caches are synced"
	E1202 20:14:42.210929       1 reflector.go:204] "Failed to watch" err="services is forbidden: User \"system:kube-scheduler\" cannot watch resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1202 20:14:42.223107       1 reflector.go:204] "Failed to watch" err="statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot watch resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1202 20:14:42.225605       1 reflector.go:204] "Failed to watch" err="configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	
	
	==> kube-scheduler [64c2a1ddd1be6130d944380509b1c4d346ccf6dd0f46057987d92f8539d0edd3] <==
	E1202 20:13:34.857863       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope"
	E1202 20:13:34.859055       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1202 20:13:34.877459       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope"
	E1202 20:13:34.878487       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1202 20:13:34.957046       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="services is forbidden: User \"system:kube-scheduler\" cannot watch resource \"services\" in API group \"\" at the cluster scope"
	E1202 20:13:34.958188       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1202 20:13:34.994435       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope"
	E1202 20:13:34.995443       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1202 20:13:35.040036       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope"
	E1202 20:13:35.041142       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1202 20:13:35.068536       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope"
	E1202 20:13:35.069599       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1202 20:13:35.101875       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot watch resource \"statefulsets\" in API group \"apps\" at the cluster scope"
	E1202 20:13:35.103022       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1202 20:13:35.105194       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\""
	E1202 20:13:35.106173       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	E1202 20:13:35.110349       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="pods is forbidden: User \"system:kube-scheduler\" cannot watch resource \"pods\" in API group \"\" at the cluster scope"
	E1202 20:13:35.111320       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	I1202 20:13:36.824367       1 shared_informer.go:377] "Caches are synced"
	I1202 20:14:28.942136       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1202 20:14:28.942296       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1202 20:14:28.942529       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1202 20:14:28.942560       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1202 20:14:28.942565       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1202 20:14:28.942585       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Dec 02 20:23:26 functional-136749 kubelet[5229]: E1202 20:23:26.494550    5229 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-9f67c86d4-xg9zf" podUID="4d2d1feb-df33-4691-b00e-ba0c03b26c74"
	Dec 02 20:23:31 functional-136749 kubelet[5229]: E1202 20:23:31.493774    5229 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-5758569b79-qj9rw" podUID="470e3549-18e7-4af4-8fed-e1b2280b47cd"
	Dec 02 20:23:39 functional-136749 kubelet[5229]: E1202 20:23:39.493835    5229 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-9w5r9" containerName="coredns"
	Dec 02 20:23:41 functional-136749 kubelet[5229]: E1202 20:23:41.494347    5229 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-9f67c86d4-xg9zf" podUID="4d2d1feb-df33-4691-b00e-ba0c03b26c74"
	Dec 02 20:23:43 functional-136749 kubelet[5229]: E1202 20:23:43.494423    5229 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-5758569b79-qj9rw" podUID="470e3549-18e7-4af4-8fed-e1b2280b47cd"
	Dec 02 20:23:46 functional-136749 kubelet[5229]: E1202 20:23:46.493858    5229 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-5565989548-8t8m7" containerName="dashboard-metrics-scraper"
	Dec 02 20:23:49 functional-136749 kubelet[5229]: E1202 20:23:49.493918    5229 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-136749" containerName="kube-scheduler"
	Dec 02 20:23:55 functional-136749 kubelet[5229]: E1202 20:23:55.493880    5229 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-136749" containerName="kube-apiserver"
	Dec 02 20:23:56 functional-136749 kubelet[5229]: E1202 20:23:56.494093    5229 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-9f67c86d4-xg9zf" podUID="4d2d1feb-df33-4691-b00e-ba0c03b26c74"
	Dec 02 20:23:57 functional-136749 kubelet[5229]: E1202 20:23:57.494170    5229 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-5758569b79-qj9rw" podUID="470e3549-18e7-4af4-8fed-e1b2280b47cd"
	Dec 02 20:24:08 functional-136749 kubelet[5229]: E1202 20:24:08.494097    5229 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-136749" containerName="kube-controller-manager"
	Dec 02 20:24:09 functional-136749 kubelet[5229]: E1202 20:24:09.493960    5229 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-5758569b79-qj9rw" podUID="470e3549-18e7-4af4-8fed-e1b2280b47cd"
	Dec 02 20:24:11 functional-136749 kubelet[5229]: E1202 20:24:11.493570    5229 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-9f67c86d4-xg9zf" podUID="4d2d1feb-df33-4691-b00e-ba0c03b26c74"
	Dec 02 20:24:21 functional-136749 kubelet[5229]: E1202 20:24:21.493675    5229 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-5758569b79-qj9rw" podUID="470e3549-18e7-4af4-8fed-e1b2280b47cd"
	Dec 02 20:24:24 functional-136749 kubelet[5229]: E1202 20:24:24.494157    5229 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-9f67c86d4-xg9zf" podUID="4d2d1feb-df33-4691-b00e-ba0c03b26c74"
	Dec 02 20:24:27 functional-136749 kubelet[5229]: E1202 20:24:27.493808    5229 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-c4twm" containerName="kubernetes-dashboard"
	Dec 02 20:24:33 functional-136749 kubelet[5229]: E1202 20:24:33.493265    5229 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-136749" containerName="etcd"
	Dec 02 20:24:35 functional-136749 kubelet[5229]: E1202 20:24:35.493798    5229 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-5758569b79-qj9rw" podUID="470e3549-18e7-4af4-8fed-e1b2280b47cd"
	Dec 02 20:24:36 functional-136749 kubelet[5229]: E1202 20:24:36.493790    5229 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-9f67c86d4-xg9zf" podUID="4d2d1feb-df33-4691-b00e-ba0c03b26c74"
	Dec 02 20:24:46 functional-136749 kubelet[5229]: E1202 20:24:46.494717    5229 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-5758569b79-qj9rw" podUID="470e3549-18e7-4af4-8fed-e1b2280b47cd"
	Dec 02 20:24:48 functional-136749 kubelet[5229]: E1202 20:24:48.495045    5229 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-9f67c86d4-xg9zf" podUID="4d2d1feb-df33-4691-b00e-ba0c03b26c74"
	Dec 02 20:24:56 functional-136749 kubelet[5229]: E1202 20:24:56.494179    5229 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-9w5r9" containerName="coredns"
	Dec 02 20:24:57 functional-136749 kubelet[5229]: E1202 20:24:57.493554    5229 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-5758569b79-qj9rw" podUID="470e3549-18e7-4af4-8fed-e1b2280b47cd"
	Dec 02 20:25:01 functional-136749 kubelet[5229]: E1202 20:25:01.494327    5229 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-9f67c86d4-xg9zf" podUID="4d2d1feb-df33-4691-b00e-ba0c03b26c74"
	Dec 02 20:25:03 functional-136749 kubelet[5229]: E1202 20:25:03.493285    5229 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-5565989548-8t8m7" containerName="dashboard-metrics-scraper"
	
	
	==> kubernetes-dashboard [d2297b9fcd9a8995fb187077837795c2925faf88ec7a2321f3d6f8f0776d5030] <==
	2025/12/02 20:15:39 Using namespace: kubernetes-dashboard
	2025/12/02 20:15:39 Using in-cluster config to connect to apiserver
	2025/12/02 20:15:39 Using secret token for csrf signing
	2025/12/02 20:15:39 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/02 20:15:39 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/02 20:15:39 Successful initial request to the apiserver, version: v1.35.0-beta.0
	2025/12/02 20:15:39 Generating JWE encryption key
	2025/12/02 20:15:39 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/02 20:15:39 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/02 20:15:40 Initializing JWE encryption key from synchronized object
	2025/12/02 20:15:40 Creating in-cluster Sidecar client
	2025/12/02 20:15:40 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/02 20:15:40 Serving insecurely on HTTP port: 9090
	2025/12/02 20:16:10 Successful request to sidecar
	2025/12/02 20:15:39 Starting overwatch
	
	
	==> storage-provisioner [2d303907410c353c9045db9eaebf7131982a3e60b16b692caf5575cf95db0246] <==
	W1202 20:13:57.136997       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:13:59.140736       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:13:59.145530       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:14:01.149086       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:14:01.154186       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:14:03.157531       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:14:03.162728       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:14:05.166310       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:14:05.170636       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:14:07.173630       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:14:07.177898       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:14:09.181710       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:14:09.187661       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:14:11.191021       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:14:11.196021       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:14:13.199489       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:14:13.204461       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:14:15.208033       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:14:15.213265       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:14:17.216230       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:14:17.220195       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:14:19.223007       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:14:19.228215       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:14:21.231652       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:14:21.236374       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [938430627e274a753817afe69bc393d3f30739ad499a1ec7912e8bbdf777c4f9] <==
	W1202 20:24:44.836053       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:24:46.839694       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:24:46.843741       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:24:48.846630       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:24:48.852171       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:24:50.855458       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:24:50.859194       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:24:52.862524       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:24:52.866723       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:24:54.870199       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:24:54.874699       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:24:56.878108       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:24:56.883222       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:24:58.886215       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:24:58.890579       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:25:00.895044       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:25:00.899357       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:25:02.902398       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:25:02.908593       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:25:04.911863       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:25:04.916088       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:25:06.920399       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:25:06.925238       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:25:08.929005       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:25:08.934092       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
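The repeated kubelet errors above ("short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list") are what keep the hello-node pods in ImagePullBackOff: with CRI-O's short-name policy set to enforcing, an unqualified image reference cannot be resolved automatically when more than one unqualified-search registry is configured. A minimal sketch of how this could be confirmed and worked around on the node follows; the registry prefix and tag are illustrative assumptions, not values taken from this run:

	# CRI-O takes its short-name policy from /etc/containers/registries.conf
	grep -E 'short-name-mode|unqualified-search-registries' /etc/containers/registries.conf
	# an unqualified pull is rejected under short-name-mode = "enforcing"
	sudo crictl pull kicbase/echo-server:latest
	# a fully qualified reference side-steps the ambiguity (registry/tag assumed for illustration)
	sudo crictl pull docker.io/kicbase/echo-server:1.0
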
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-136749 -n functional-136749
helpers_test.go:269: (dbg) Run:  kubectl --context functional-136749 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-5758569b79-qj9rw hello-node-connect-9f67c86d4-xg9zf
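Note that busybox-mount appears in this list even though its describe output below reports Status: Succeeded: the harness selects every pod whose phase is not Running, so completed pods are caught alongside the two ImagePullBackOff pods. A hand-run query to separate the two cases (context name taken from this report) might look like:

	kubectl --context functional-136749 get po -A --field-selector=status.phase=Succeeded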
helpers_test.go:282: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-136749 describe pod busybox-mount hello-node-5758569b79-qj9rw hello-node-connect-9f67c86d4-xg9zf
helpers_test.go:290: (dbg) kubectl --context functional-136749 describe pod busybox-mount hello-node-5758569b79-qj9rw hello-node-connect-9f67c86d4-xg9zf:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-136749/192.168.49.2
	Start Time:       Tue, 02 Dec 2025 20:15:19 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.8
	IPs:
	  IP:  10.244.0.8
	Containers:
	  mount-munger:
	    Container ID:  cri-o://3ac6049adb119d55954aa9d5f031bc8327b3ae9b2d89d13d2699d80716708772
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Tue, 02 Dec 2025 20:15:21 +0000
	      Finished:     Tue, 02 Dec 2025 20:15:21 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mqkq5 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-mqkq5:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  9m51s  default-scheduler  Successfully assigned default/busybox-mount to functional-136749
	  Normal  Pulling    9m52s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     9m50s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.225s (2.225s including waiting). Image size: 4631262 bytes.
	  Normal  Created    9m50s  kubelet            Container created
	  Normal  Started    9m50s  kubelet            Container started
	
	
	Name:             hello-node-5758569b79-qj9rw
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-136749/192.168.49.2
	Start Time:       Tue, 02 Dec 2025 20:15:02 +0000
	Labels:           app=hello-node
	                  pod-template-hash=5758569b79
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.4
	IPs:
	  IP:           10.244.0.4
	Controlled By:  ReplicaSet/hello-node-5758569b79
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-69cdm (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-69cdm:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-5758569b79-qj9rw to functional-136749
	  Normal   Pulling    7m17s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m17s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m17s (x5 over 10m)   kubelet            Error: ErrImagePull
	  Warning  Failed     5m2s (x20 over 10m)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m51s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
	
	
	Name:             hello-node-connect-9f67c86d4-xg9zf
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-136749/192.168.49.2
	Start Time:       Tue, 02 Dec 2025 20:15:08 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=9f67c86d4
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:           10.244.0.6
	Controlled By:  ReplicaSet/hello-node-connect-9f67c86d4
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xbzqk (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-xbzqk:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-9f67c86d4-xg9zf to functional-136749
	  Normal   Pulling    6m58s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     6m58s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     6m58s (x5 over 10m)   kubelet            Error: ErrImagePull
	  Warning  Failed     5m (x20 over 10m)     kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m47s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (603.20s)
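The root cause is visible in the pod events above: the kubelet cannot pull "kicbase/echo-server" because CRI-O's short-name resolution is set to enforcing and the unqualified name kicbase/echo-server:latest resolves ambiguously against the configured search registries ("returns ambiguous list"), so the pull is rejected rather than guessed. A minimal workaround sketch, assuming the image is actually published on Docker Hub and using deployment/service names that mirror the test rather than a verified fix:

# Hedged workaround sketch: a fully qualified image name avoids short-name resolution entirely.
kubectl --context functional-136749 create deployment hello-node-connect --image=docker.io/kicbase/echo-server:latest
kubectl --context functional-136749 expose deployment hello-node-connect --type=NodePort --port=8080
kubectl --context functional-136749 rollout status deployment/hello-node-connect

The same enforcement also explains the hello-node ImagePullBackOff that drives the DeployApp and ServiceCmd failures later in this report; the setting itself lives in /etc/containers/registries.conf inside the node.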

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.09s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-136749 image load --daemon kicbase/echo-server:functional-136749 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-136749 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-136749" to be loaded into minikube but the image is not there
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.09s)
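`image load --daemon` is supposed to copy the tag from the host Docker daemon into the cluster's CRI-O image store, after which `image ls` should list it; here the follow-up listing does not contain the tag, so the transfer failed without an error surfacing in the test output. A diagnostic sketch, assuming the tag still exists in the host daemon and the functional-136749 profile is still running:

# Diagnostic sketch: confirm the source tag exists, then re-run the load with verbose logs.
docker image inspect kicbase/echo-server:functional-136749 --format '{{.Id}}'
out/minikube-linux-amd64 -p functional-136749 image load --daemon kicbase/echo-server:functional-136749 --alsologtostderr -v=8
out/minikube-linux-amd64 -p functional-136749 image ls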

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (600.68s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-136749 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-136749 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-5758569b79-qj9rw" [470e3549-18e7-4af4-8fed-e1b2280b47cd] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1460: ***** TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-136749 -n functional-136749
functional_test.go:1460: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-12-02 20:25:03.193663933 +0000 UTC m=+1851.743446563
functional_test.go:1460: (dbg) Run:  kubectl --context functional-136749 describe po hello-node-5758569b79-qj9rw -n default
functional_test.go:1460: (dbg) kubectl --context functional-136749 describe po hello-node-5758569b79-qj9rw -n default:
Name:             hello-node-5758569b79-qj9rw
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-136749/192.168.49.2
Start Time:       Tue, 02 Dec 2025 20:15:02 +0000
Labels:           app=hello-node
pod-template-hash=5758569b79
Annotations:      <none>
Status:           Pending
IP:               10.244.0.4
IPs:
IP:           10.244.0.4
Controlled By:  ReplicaSet/hello-node-5758569b79
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-69cdm (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-69cdm:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-5758569b79-qj9rw to functional-136749
Normal   Pulling    7m9s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m9s (x5 over 10m)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m9s (x5 over 10m)    kubelet            Error: ErrImagePull
Warning  Failed     4m54s (x20 over 10m)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m43s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1460: (dbg) Run:  kubectl --context functional-136749 logs hello-node-5758569b79-qj9rw -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-136749 logs hello-node-5758569b79-qj9rw -n default: exit status 1 (71.899302ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-5758569b79-qj9rw" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-136749 logs hello-node-5758569b79-qj9rw -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (600.68s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (1.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-136749 image load --daemon kicbase/echo-server:functional-136749 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-136749 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-136749" to be loaded into minikube but the image is not there
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (1.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.74s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-136749
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-136749 image load --daemon kicbase/echo-server:functional-136749 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-136749 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-136749" to be loaded into minikube but the image is not there
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.74s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.32s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-136749 image save kicbase/echo-server:functional-136749 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.32s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.25s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-136749 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I1202 20:15:07.409036  469457 out.go:360] Setting OutFile to fd 1 ...
	I1202 20:15:07.409350  469457 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:15:07.409366  469457 out.go:374] Setting ErrFile to fd 2...
	I1202 20:15:07.409373  469457 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:15:07.409677  469457 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-407427/.minikube/bin
	I1202 20:15:07.410471  469457 config.go:182] Loaded profile config "functional-136749": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 20:15:07.410620  469457 config.go:182] Loaded profile config "functional-136749": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 20:15:07.411228  469457 cli_runner.go:164] Run: docker container inspect functional-136749 --format={{.State.Status}}
	I1202 20:15:07.430661  469457 ssh_runner.go:195] Run: systemctl --version
	I1202 20:15:07.430721  469457 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-136749
	I1202 20:15:07.455674  469457 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/functional-136749/id_rsa Username:docker}
	I1202 20:15:07.563185  469457 cache_images.go:291] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar
	W1202 20:15:07.563280  469457 cache_images.go:255] Failed to load cached images for "functional-136749": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar: no such file or directory
	I1202 20:15:07.563310  469457 cache_images.go:267] failed pushing to: functional-136749

                                                
                                                
** /stderr **
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.25s)
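The stat error in the stderr above makes this a direct cascade from ImageSaveToFile: `image save` exited without writing echo-server-save.tar, so there is nothing on disk for `image load` to read. A minimal re-check sketch, with a hypothetical /tmp path standing in for the Jenkins workspace path:

# Re-check sketch: the tarball must exist before `image load` can consume it.
out/minikube-linux-amd64 -p functional-136749 image save kicbase/echo-server:functional-136749 /tmp/echo-server-save.tar --alsologtostderr
ls -l /tmp/echo-server-save.tar
out/minikube-linux-amd64 -p functional-136749 image load /tmp/echo-server-save.tar --alsologtostderr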

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.4s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-136749
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-136749 image save --daemon kicbase/echo-server:functional-136749 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-136749
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-136749: exit status 1 (18.447372ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-136749

                                                
                                                
** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-136749

                                                
                                                
** /stderr **
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.40s)
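`image save --daemon` is expected to export the tag from the cluster runtime back into the host Docker daemon, which the test then looks up under the localhost/ prefix. Given the load failures above, the tag most likely never reached CRI-O, so there was nothing to export. A quick check, assuming the same profile and host daemon:

# Quick check: is the tag present on either side of the transfer?
out/minikube-linux-amd64 -p functional-136749 image ls | grep echo-server || echo "tag not in cluster runtime"
docker images | grep echo-server || echo "tag not in host Docker daemon"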

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.57s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-136749 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-136749 service --namespace=default --https --url hello-node: exit status 115 (566.616303ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:31323
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-amd64 -p functional-136749 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.57s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.55s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-136749 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-136749 service hello-node --url --format={{.IP}}: exit status 115 (554.655122ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-136749 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.55s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.56s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-136749 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-136749 service hello-node --url: exit status 115 (560.630328ms)

                                                
                                                
-- stdout --
	http://192.168.49.2:31323
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-amd64 -p functional-136749 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:31323
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.56s)
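The HTTPS, Format, and URL subtests all fail the same way: minikube resolves and prints the NodePort URL but exits with SVC_UNREACHABLE because no running pod backs the hello-node service, which is the DeployApp ImagePullBackOff again rather than a separate defect. A way to confirm the service has no ready endpoints, sketched with kubectl against the same context:

# Confirm there are no ready backends behind the NodePort.
kubectl --context functional-136749 get pods -l app=hello-node
kubectl --context functional-136749 get endpointslices -l kubernetes.io/service-name=hello-node
kubectl --context functional-136749 get svc hello-node -o wide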

                                                
                                    
TestJSONOutput/pause/Command (2.31s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-194418 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p json-output-194418 --output=json --user=testUser: exit status 80 (2.312209295s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"bdca790e-69ae-4089-a3a5-9edb4b3a5a86","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-194418 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"a60ad1bb-8abf-4ecb-b30f-33f20c5bfc1f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-12-02T20:34:04Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"561e97fd-606d-460f-a8a3-8cb5e59c6e44","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 pause -p json-output-194418 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (2.31s)

                                                
                                    
TestJSONOutput/unpause/Command (1.81s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-194418 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 unpause -p json-output-194418 --output=json --user=testUser: exit status 80 (1.810549101s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"2a428b39-76ee-46e6-a1bb-741ae3f58e86","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-194418 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"e811028b-6a5c-423c-afb8-9f4478072160","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-12-02T20:34:06Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"dddb2425-0035-45ea-8d7d-df0150c84c9a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 unpause -p json-output-194418 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (1.81s)
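Both pause and unpause fail before touching any container: minikube runs `sudo runc list -f json` on the node to enumerate what is running, and runc aborts because /run/runc does not exist there. A diagnostic sketch against the same profile (the TestPause failure below shows the identical error on pause-796891), assuming the cluster is still up:

# Diagnostic sketch: check the runc state directory and what the CRI actually reports.
out/minikube-linux-amd64 -p json-output-194418 ssh "sudo ls -ld /run/runc"
out/minikube-linux-amd64 -p json-output-194418 ssh "sudo crictl ps"
out/minikube-linux-amd64 -p json-output-194418 ssh "sudo runc list -f json"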

                                                
                                    
TestPause/serial/Pause (5.68s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-796891 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p pause-796891 --alsologtostderr -v=5: exit status 80 (1.780874516s)

                                                
                                                
-- stdout --
	* Pausing node pause-796891 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 20:47:30.583386  622477 out.go:360] Setting OutFile to fd 1 ...
	I1202 20:47:30.583885  622477 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:47:30.583902  622477 out.go:374] Setting ErrFile to fd 2...
	I1202 20:47:30.583916  622477 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:47:30.584594  622477 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-407427/.minikube/bin
	I1202 20:47:30.585029  622477 out.go:368] Setting JSON to false
	I1202 20:47:30.585082  622477 mustload.go:66] Loading cluster: pause-796891
	I1202 20:47:30.585916  622477 config.go:182] Loaded profile config "pause-796891": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 20:47:30.586423  622477 cli_runner.go:164] Run: docker container inspect pause-796891 --format={{.State.Status}}
	I1202 20:47:30.613383  622477 host.go:66] Checking if "pause-796891" exists ...
	I1202 20:47:30.613745  622477 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 20:47:30.696003  622477 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-02 20:47:30.683652557 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 20:47:30.696851  622477 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-796891 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) want
virtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1202 20:47:30.698902  622477 out.go:179] * Pausing node pause-796891 ... 
	I1202 20:47:30.701529  622477 host.go:66] Checking if "pause-796891" exists ...
	I1202 20:47:30.701882  622477 ssh_runner.go:195] Run: systemctl --version
	I1202 20:47:30.701936  622477 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-796891
	I1202 20:47:30.732165  622477 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33358 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/pause-796891/id_rsa Username:docker}
	I1202 20:47:30.844833  622477 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 20:47:30.860041  622477 pause.go:52] kubelet running: true
	I1202 20:47:30.860118  622477 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1202 20:47:31.001118  622477 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1202 20:47:31.001206  622477 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1202 20:47:31.082764  622477 cri.go:89] found id: "505fac029aa8ebe69ad93ef916db5bd1916f697eabe9a95ab19a1ef2cf11f065"
	I1202 20:47:31.082789  622477 cri.go:89] found id: "793ea7c3faa0f3cd934cc0b58d2de439c2b3bfb6db801cfd46e1ed4c7ddca010"
	I1202 20:47:31.082795  622477 cri.go:89] found id: "a1c9c0e1a152d0e55298e0b7dbd3ffe1cf2959ae52fbb82cfa7914940cd2e07f"
	I1202 20:47:31.082799  622477 cri.go:89] found id: "9318d85c2a3992238185b0b65f90b7afce8f42bd6fd6934e4563b4bfc16b05a8"
	I1202 20:47:31.082804  622477 cri.go:89] found id: "23ddd07bcb5f5b56d13f1e4f94e80d95f00c7337ca84530879e15c87231fe5a1"
	I1202 20:47:31.082808  622477 cri.go:89] found id: "960f6856e1cc1fc4d6d11314186574243ec408ec9cf17d2adcd795f4de63295e"
	I1202 20:47:31.082812  622477 cri.go:89] found id: "c11a8e42979ff1d43624fe1ca5a69467905d817d98358acd034602c84ec2d6c5"
	I1202 20:47:31.082816  622477 cri.go:89] found id: ""
	I1202 20:47:31.082860  622477 ssh_runner.go:195] Run: sudo runc list -f json
	I1202 20:47:31.095647  622477 retry.go:31] will retry after 315.057235ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T20:47:31Z" level=error msg="open /run/runc: no such file or directory"
	I1202 20:47:31.411203  622477 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 20:47:31.426606  622477 pause.go:52] kubelet running: false
	I1202 20:47:31.426662  622477 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1202 20:47:31.550924  622477 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1202 20:47:31.551012  622477 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1202 20:47:31.628874  622477 cri.go:89] found id: "505fac029aa8ebe69ad93ef916db5bd1916f697eabe9a95ab19a1ef2cf11f065"
	I1202 20:47:31.628896  622477 cri.go:89] found id: "793ea7c3faa0f3cd934cc0b58d2de439c2b3bfb6db801cfd46e1ed4c7ddca010"
	I1202 20:47:31.628900  622477 cri.go:89] found id: "a1c9c0e1a152d0e55298e0b7dbd3ffe1cf2959ae52fbb82cfa7914940cd2e07f"
	I1202 20:47:31.628904  622477 cri.go:89] found id: "9318d85c2a3992238185b0b65f90b7afce8f42bd6fd6934e4563b4bfc16b05a8"
	I1202 20:47:31.628906  622477 cri.go:89] found id: "23ddd07bcb5f5b56d13f1e4f94e80d95f00c7337ca84530879e15c87231fe5a1"
	I1202 20:47:31.628909  622477 cri.go:89] found id: "960f6856e1cc1fc4d6d11314186574243ec408ec9cf17d2adcd795f4de63295e"
	I1202 20:47:31.628912  622477 cri.go:89] found id: "c11a8e42979ff1d43624fe1ca5a69467905d817d98358acd034602c84ec2d6c5"
	I1202 20:47:31.628916  622477 cri.go:89] found id: ""
	I1202 20:47:31.628964  622477 ssh_runner.go:195] Run: sudo runc list -f json
	I1202 20:47:31.641552  622477 retry.go:31] will retry after 386.405051ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T20:47:31Z" level=error msg="open /run/runc: no such file or directory"
	I1202 20:47:32.029000  622477 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 20:47:32.044258  622477 pause.go:52] kubelet running: false
	I1202 20:47:32.044310  622477 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1202 20:47:32.171817  622477 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1202 20:47:32.171892  622477 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1202 20:47:32.256760  622477 cri.go:89] found id: "505fac029aa8ebe69ad93ef916db5bd1916f697eabe9a95ab19a1ef2cf11f065"
	I1202 20:47:32.256788  622477 cri.go:89] found id: "793ea7c3faa0f3cd934cc0b58d2de439c2b3bfb6db801cfd46e1ed4c7ddca010"
	I1202 20:47:32.256795  622477 cri.go:89] found id: "a1c9c0e1a152d0e55298e0b7dbd3ffe1cf2959ae52fbb82cfa7914940cd2e07f"
	I1202 20:47:32.256800  622477 cri.go:89] found id: "9318d85c2a3992238185b0b65f90b7afce8f42bd6fd6934e4563b4bfc16b05a8"
	I1202 20:47:32.256805  622477 cri.go:89] found id: "23ddd07bcb5f5b56d13f1e4f94e80d95f00c7337ca84530879e15c87231fe5a1"
	I1202 20:47:32.256810  622477 cri.go:89] found id: "960f6856e1cc1fc4d6d11314186574243ec408ec9cf17d2adcd795f4de63295e"
	I1202 20:47:32.256815  622477 cri.go:89] found id: "c11a8e42979ff1d43624fe1ca5a69467905d817d98358acd034602c84ec2d6c5"
	I1202 20:47:32.256819  622477 cri.go:89] found id: ""
	I1202 20:47:32.256867  622477 ssh_runner.go:195] Run: sudo runc list -f json
	I1202 20:47:32.272099  622477 out.go:203] 
	W1202 20:47:32.278242  622477 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T20:47:32Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T20:47:32Z" level=error msg="open /run/runc: no such file or directory"
	
	W1202 20:47:32.278275  622477 out.go:285] * 
	* 
	W1202 20:47:32.284158  622477 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 20:47:32.285183  622477 out.go:203] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-amd64 pause -p pause-796891 --alsologtostderr -v=5" : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-796891
helpers_test.go:243: (dbg) docker inspect pause-796891:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "36a013bef5c0dcfabb28c265ddb66930b8ec8f02141f5e1e9a8546cb86f55884",
	        "Created": "2025-12-02T20:46:39.882588595Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 605538,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-02T20:46:39.950248689Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/36a013bef5c0dcfabb28c265ddb66930b8ec8f02141f5e1e9a8546cb86f55884/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/36a013bef5c0dcfabb28c265ddb66930b8ec8f02141f5e1e9a8546cb86f55884/hostname",
	        "HostsPath": "/var/lib/docker/containers/36a013bef5c0dcfabb28c265ddb66930b8ec8f02141f5e1e9a8546cb86f55884/hosts",
	        "LogPath": "/var/lib/docker/containers/36a013bef5c0dcfabb28c265ddb66930b8ec8f02141f5e1e9a8546cb86f55884/36a013bef5c0dcfabb28c265ddb66930b8ec8f02141f5e1e9a8546cb86f55884-json.log",
	        "Name": "/pause-796891",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-796891:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-796891",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "36a013bef5c0dcfabb28c265ddb66930b8ec8f02141f5e1e9a8546cb86f55884",
	                "LowerDir": "/var/lib/docker/overlay2/a8e074569e09c63325dedcca9a73cf7004ec2a340680cadbecf8b2d807f3c814-init/diff:/var/lib/docker/overlay2/49bb44e987885c48600d5ae7ad3c81cad82a00f38070c0460882c5746d9fae59/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a8e074569e09c63325dedcca9a73cf7004ec2a340680cadbecf8b2d807f3c814/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a8e074569e09c63325dedcca9a73cf7004ec2a340680cadbecf8b2d807f3c814/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a8e074569e09c63325dedcca9a73cf7004ec2a340680cadbecf8b2d807f3c814/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "pause-796891",
	                "Source": "/var/lib/docker/volumes/pause-796891/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-796891",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-796891",
	                "name.minikube.sigs.k8s.io": "pause-796891",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "35dba8fe1ac630730e5b0d6943f1e3a3cfcff1179d763761909b7aea10119474",
	            "SandboxKey": "/var/run/docker/netns/35dba8fe1ac6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33358"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33359"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33362"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33360"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33361"
	                    }
	                ]
	            },
	            "Networks": {
	                "pause-796891": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b07d217213ccf6fa9962641c6794a6088506dac5cada57db9351fb7ca34bc5a1",
	                    "EndpointID": "66f6d63570d4030b8d5358423336079e5c35409459cf60e030d119c2df0ca300",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "d2:d5:22:fe:85:5b",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-796891",
	                        "36a013bef5c0"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
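(Note on the inspect output above: the host-side port mappings under "Ports" can be read back individually with the same Go template that minikube's own tooling runs later in this log, for example:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' pause-796891

which for this run should print 33358, the SSH port shown in the Ports block. Swapping "22/tcp" for any of the other exposed ports, or .HostPort for another field, reads the corresponding value; this command is a sketch for querying the data and is not part of the captured test run.)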
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-796891 -n pause-796891
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-796891 -n pause-796891: exit status 2 (388.897284ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-796891 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-796891 logs -n 25: (1.051790892s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                 ARGS                                                  │       PROFILE       │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p kubenet-775392 sudo cat /var/lib/kubelet/config.yaml                                               │ kubenet-775392      │ jenkins │ v1.37.0 │ 02 Dec 25 20:47 UTC │                     │
	│ ssh     │ -p kubenet-775392 sudo systemctl status docker --all --full --no-pager                                │ kubenet-775392      │ jenkins │ v1.37.0 │ 02 Dec 25 20:47 UTC │                     │
	│ ssh     │ -p kubenet-775392 sudo systemctl cat docker --no-pager                                                │ kubenet-775392      │ jenkins │ v1.37.0 │ 02 Dec 25 20:47 UTC │                     │
	│ ssh     │ -p kubenet-775392 sudo cat /etc/docker/daemon.json                                                    │ kubenet-775392      │ jenkins │ v1.37.0 │ 02 Dec 25 20:47 UTC │                     │
	│ pause   │ -p pause-796891 --alsologtostderr -v=5                                                                │ pause-796891        │ jenkins │ v1.37.0 │ 02 Dec 25 20:47 UTC │                     │
	│ ssh     │ -p kubenet-775392 sudo docker system info                                                             │ kubenet-775392      │ jenkins │ v1.37.0 │ 02 Dec 25 20:47 UTC │                     │
	│ ssh     │ -p kubenet-775392 sudo systemctl status cri-docker --all --full --no-pager                            │ kubenet-775392      │ jenkins │ v1.37.0 │ 02 Dec 25 20:47 UTC │                     │
	│ ssh     │ -p kubenet-775392 sudo systemctl cat cri-docker --no-pager                                            │ kubenet-775392      │ jenkins │ v1.37.0 │ 02 Dec 25 20:47 UTC │                     │
	│ ssh     │ -p kubenet-775392 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                       │ kubenet-775392      │ jenkins │ v1.37.0 │ 02 Dec 25 20:47 UTC │                     │
	│ ssh     │ -p kubenet-775392 sudo cat /usr/lib/systemd/system/cri-docker.service                                 │ kubenet-775392      │ jenkins │ v1.37.0 │ 02 Dec 25 20:47 UTC │                     │
	│ ssh     │ -p kubenet-775392 sudo cri-dockerd --version                                                          │ kubenet-775392      │ jenkins │ v1.37.0 │ 02 Dec 25 20:47 UTC │                     │
	│ ssh     │ -p kubenet-775392 sudo systemctl status containerd --all --full --no-pager                            │ kubenet-775392      │ jenkins │ v1.37.0 │ 02 Dec 25 20:47 UTC │                     │
	│ ssh     │ -p kubenet-775392 sudo systemctl cat containerd --no-pager                                            │ kubenet-775392      │ jenkins │ v1.37.0 │ 02 Dec 25 20:47 UTC │                     │
	│ ssh     │ -p kubenet-775392 sudo cat /lib/systemd/system/containerd.service                                     │ kubenet-775392      │ jenkins │ v1.37.0 │ 02 Dec 25 20:47 UTC │                     │
	│ ssh     │ -p kubenet-775392 sudo cat /etc/containerd/config.toml                                                │ kubenet-775392      │ jenkins │ v1.37.0 │ 02 Dec 25 20:47 UTC │                     │
	│ ssh     │ -p kubenet-775392 sudo containerd config dump                                                         │ kubenet-775392      │ jenkins │ v1.37.0 │ 02 Dec 25 20:47 UTC │                     │
	│ ssh     │ -p kubenet-775392 sudo systemctl status crio --all --full --no-pager                                  │ kubenet-775392      │ jenkins │ v1.37.0 │ 02 Dec 25 20:47 UTC │                     │
	│ ssh     │ -p kubenet-775392 sudo systemctl cat crio --no-pager                                                  │ kubenet-775392      │ jenkins │ v1.37.0 │ 02 Dec 25 20:47 UTC │                     │
	│ ssh     │ -p kubenet-775392 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                        │ kubenet-775392      │ jenkins │ v1.37.0 │ 02 Dec 25 20:47 UTC │                     │
	│ ssh     │ -p kubenet-775392 sudo crio config                                                                    │ kubenet-775392      │ jenkins │ v1.37.0 │ 02 Dec 25 20:47 UTC │                     │
	│ delete  │ -p kubenet-775392                                                                                     │ kubenet-775392      │ jenkins │ v1.37.0 │ 02 Dec 25 20:47 UTC │ 02 Dec 25 20:47 UTC │
	│ start   │ -p false-775392 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio │ false-775392        │ jenkins │ v1.37.0 │ 02 Dec 25 20:47 UTC │                     │
	│ ssh     │ -p NoKubernetes-811845 sudo systemctl is-active --quiet service kubelet                               │ NoKubernetes-811845 │ jenkins │ v1.37.0 │ 02 Dec 25 20:47 UTC │                     │
	│ ssh     │ -p false-775392 sudo cat /etc/nsswitch.conf                                                           │ false-775392        │ jenkins │ v1.37.0 │ 02 Dec 25 20:47 UTC │                     │
	│ ssh     │ -p false-775392 sudo cat /etc/hosts                                                                   │ false-775392        │ jenkins │ v1.37.0 │ 02 Dec 25 20:47 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 20:47:31
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 20:47:31.911678  623164 out.go:360] Setting OutFile to fd 1 ...
	I1202 20:47:31.911934  623164 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:47:31.911943  623164 out.go:374] Setting ErrFile to fd 2...
	I1202 20:47:31.911948  623164 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:47:31.912154  623164 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-407427/.minikube/bin
	I1202 20:47:31.912632  623164 out.go:368] Setting JSON to false
	I1202 20:47:31.913868  623164 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":8996,"bootTime":1764699456,"procs":280,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 20:47:31.913936  623164 start.go:143] virtualization: kvm guest
	I1202 20:47:31.915690  623164 out.go:179] * [false-775392] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1202 20:47:31.917091  623164 notify.go:221] Checking for updates...
	I1202 20:47:31.917098  623164 out.go:179]   - MINIKUBE_LOCATION=21997
	I1202 20:47:31.918439  623164 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 20:47:31.919811  623164 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-407427/kubeconfig
	I1202 20:47:31.921107  623164 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-407427/.minikube
	I1202 20:47:31.923096  623164 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1202 20:47:31.924420  623164 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 20:47:31.926232  623164 config.go:182] Loaded profile config "NoKubernetes-811845": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1202 20:47:31.926392  623164 config.go:182] Loaded profile config "pause-796891": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 20:47:31.926505  623164 config.go:182] Loaded profile config "stopped-upgrade-814137": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1202 20:47:31.926625  623164 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 20:47:31.951009  623164 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1202 20:47:31.951144  623164 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 20:47:32.011361  623164 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-02 20:47:32.00108819 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 20:47:32.011480  623164 docker.go:319] overlay module found
	I1202 20:47:32.013143  623164 out.go:179] * Using the docker driver based on user configuration
	I1202 20:47:32.014250  623164 start.go:309] selected driver: docker
	I1202 20:47:32.014270  623164 start.go:927] validating driver "docker" against <nil>
	I1202 20:47:32.014283  623164 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 20:47:32.015834  623164 out.go:203] 
	W1202 20:47:32.016885  623164 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1202 20:47:32.018010  623164 out.go:203] 
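	(The MK_USAGE exit just above comes from the --cni=false flag recorded for the false-775392 start in the Audit table: minikube rejects the crio runtime without any CNI. A start of the same shape that passes this validation only needs a concrete CNI value, e.g. a hypothetical invocation such as
	
	out/minikube-linux-amd64 start -p false-775392 --memory=3072 --alsologtostderr --cni=bridge --driver=docker --container-runtime=crio
	
	where bridge is one of the CNI values minikube accepts and the remaining flags are those recorded in the Audit table; this command is illustrative only and was not executed in this run.)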
	I1202 20:47:30.387896  619411 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 20:47:30.387945  619411 machine.go:97] duration metric: took 4.141394246s to provisionDockerMachine
	I1202 20:47:30.387964  619411 start.go:293] postStartSetup for "NoKubernetes-811845" (driver="docker")
	I1202 20:47:30.387976  619411 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 20:47:30.388032  619411 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 20:47:30.388103  619411 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-811845
	I1202 20:47:30.411688  619411 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33388 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/NoKubernetes-811845/id_rsa Username:docker}
	I1202 20:47:30.523613  619411 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 20:47:30.529750  619411 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 20:47:30.529775  619411 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 20:47:30.529788  619411 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-407427/.minikube/addons for local assets ...
	I1202 20:47:30.529854  619411 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-407427/.minikube/files for local assets ...
	I1202 20:47:30.529944  619411 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-407427/.minikube/files/etc/ssl/certs/4110322.pem -> 4110322.pem in /etc/ssl/certs
	I1202 20:47:30.530126  619411 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 20:47:30.539695  619411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/files/etc/ssl/certs/4110322.pem --> /etc/ssl/certs/4110322.pem (1708 bytes)
	I1202 20:47:30.562565  619411 start.go:296] duration metric: took 174.586868ms for postStartSetup
	I1202 20:47:30.562655  619411 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 20:47:30.562695  619411 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-811845
	I1202 20:47:30.591389  619411 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33388 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/NoKubernetes-811845/id_rsa Username:docker}
	I1202 20:47:30.704236  619411 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 20:47:30.710336  619411 fix.go:56] duration metric: took 4.823827565s for fixHost
	I1202 20:47:30.710356  619411 start.go:83] releasing machines lock for "NoKubernetes-811845", held for 4.823867051s
	I1202 20:47:30.710414  619411 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" NoKubernetes-811845
	I1202 20:47:30.743291  619411 ssh_runner.go:195] Run: cat /version.json
	I1202 20:47:30.743352  619411 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 20:47:30.743375  619411 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-811845
	I1202 20:47:30.743463  619411 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-811845
	I1202 20:47:30.768939  619411 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33388 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/NoKubernetes-811845/id_rsa Username:docker}
	I1202 20:47:30.774027  619411 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33388 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/NoKubernetes-811845/id_rsa Username:docker}
	I1202 20:47:30.872915  619411 ssh_runner.go:195] Run: systemctl --version
	I1202 20:47:30.943442  619411 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 20:47:30.981146  619411 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 20:47:30.986721  619411 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 20:47:30.986789  619411 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 20:47:30.996884  619411 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1202 20:47:30.996899  619411 start.go:496] detecting cgroup driver to use...
	I1202 20:47:30.996964  619411 detect.go:190] detected "systemd" cgroup driver on host os
	I1202 20:47:30.997003  619411 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 20:47:31.013620  619411 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 20:47:31.029873  619411 docker.go:218] disabling cri-docker service (if available) ...
	I1202 20:47:31.029948  619411 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 20:47:31.048621  619411 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 20:47:31.065571  619411 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 20:47:31.160371  619411 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 20:47:31.253148  619411 docker.go:234] disabling docker service ...
	I1202 20:47:31.253234  619411 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 20:47:31.268898  619411 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 20:47:31.283090  619411 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 20:47:31.369321  619411 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 20:47:31.470084  619411 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 20:47:31.487031  619411 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 20:47:31.502857  619411 download.go:108] Downloading: https://dl.k8s.io/release/v0.0.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v0.0.0/bin/linux/amd64/kubeadm.sha1 -> /home/jenkins/minikube-integration/21997-407427/.minikube/cache/linux/amd64/v0.0.0/kubeadm
	I1202 20:47:32.091207  619411 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1202 20:47:32.091259  619411 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:47:32.102152  619411 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1202 20:47:32.102212  619411 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:47:32.111653  619411 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:47:32.122534  619411 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:47:32.133388  619411 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 20:47:32.142615  619411 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 20:47:32.151236  619411 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 20:47:32.159918  619411 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 20:47:32.256579  619411 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1202 20:47:32.409058  619411 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 20:47:32.409191  619411 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 20:47:32.414637  619411 start.go:564] Will wait 60s for crictl version
	I1202 20:47:32.414692  619411 ssh_runner.go:195] Run: which crictl
	I1202 20:47:32.419287  619411 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 20:47:32.450646  619411 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1202 20:47:32.450747  619411 ssh_runner.go:195] Run: crio --version
	I1202 20:47:32.481836  619411 ssh_runner.go:195] Run: crio --version
	I1202 20:47:32.517650  619411 out.go:179] * Preparing CRI-O 1.34.2 ...
	I1202 20:47:32.519237  619411 ssh_runner.go:195] Run: rm -f paused
	I1202 20:47:32.524718  619411 out.go:179] * Done! minikube is ready without Kubernetes!
	I1202 20:47:32.527409  619411 out.go:203] ╭───────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                       │
	│                        * Things to try without Kubernetes ...                         │
	│                                                                                       │
	│    - "minikube ssh" to SSH into minikube's node.                                      │
	│    - "minikube podman-env" to point your podman-cli to the podman inside minikube.    │
	│    - "minikube image" to build images without docker.                                 │
	│                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────╯
	
	
	==> CRI-O <==
	Dec 02 20:47:27 pause-796891 crio[2159]: time="2025-12-02T20:47:27.072544957Z" level=info msg="RDT not available in the host system"
	Dec 02 20:47:27 pause-796891 crio[2159]: time="2025-12-02T20:47:27.07255457Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 02 20:47:27 pause-796891 crio[2159]: time="2025-12-02T20:47:27.073413745Z" level=info msg="Conmon does support the --sync option"
	Dec 02 20:47:27 pause-796891 crio[2159]: time="2025-12-02T20:47:27.073433162Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 02 20:47:27 pause-796891 crio[2159]: time="2025-12-02T20:47:27.073446947Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 02 20:47:27 pause-796891 crio[2159]: time="2025-12-02T20:47:27.07423415Z" level=info msg="Conmon does support the --sync option"
	Dec 02 20:47:27 pause-796891 crio[2159]: time="2025-12-02T20:47:27.074253868Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 02 20:47:27 pause-796891 crio[2159]: time="2025-12-02T20:47:27.078594058Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 02 20:47:27 pause-796891 crio[2159]: time="2025-12-02T20:47:27.078622515Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 02 20:47:27 pause-796891 crio[2159]: time="2025-12-02T20:47:27.079390918Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hoo
ks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_
mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory
= \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/
cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_listen = \"
/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Dec 02 20:47:27 pause-796891 crio[2159]: time="2025-12-02T20:47:27.079938736Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Dec 02 20:47:27 pause-796891 crio[2159]: time="2025-12-02T20:47:27.08000555Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Dec 02 20:47:27 pause-796891 crio[2159]: time="2025-12-02T20:47:27.184744821Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-w92qd Namespace:kube-system ID:6c3d967b27b7f06f6d171c8def754307d88c748adfb2831a0fc462e8ae9b3d37 UID:0cb44388-0b22-4297-9ab4-151f169fe011 NetNS:/var/run/netns/db9b6b8d-8c50-42a7-afec-53553a423c11 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00082e2b8}] Aliases:map[]}"
	Dec 02 20:47:27 pause-796891 crio[2159]: time="2025-12-02T20:47:27.185020376Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-w92qd for CNI network kindnet (type=ptp)"
	Dec 02 20:47:27 pause-796891 crio[2159]: time="2025-12-02T20:47:27.185658078Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 02 20:47:27 pause-796891 crio[2159]: time="2025-12-02T20:47:27.185690212Z" level=info msg="Starting seccomp notifier watcher"
	Dec 02 20:47:27 pause-796891 crio[2159]: time="2025-12-02T20:47:27.185742733Z" level=info msg="Create NRI interface"
	Dec 02 20:47:27 pause-796891 crio[2159]: time="2025-12-02T20:47:27.185865315Z" level=info msg="built-in NRI default validator is disabled"
	Dec 02 20:47:27 pause-796891 crio[2159]: time="2025-12-02T20:47:27.185875859Z" level=info msg="runtime interface created"
	Dec 02 20:47:27 pause-796891 crio[2159]: time="2025-12-02T20:47:27.185890104Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 02 20:47:27 pause-796891 crio[2159]: time="2025-12-02T20:47:27.185898987Z" level=info msg="runtime interface starting up..."
	Dec 02 20:47:27 pause-796891 crio[2159]: time="2025-12-02T20:47:27.185906268Z" level=info msg="starting plugins..."
	Dec 02 20:47:27 pause-796891 crio[2159]: time="2025-12-02T20:47:27.185921754Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 02 20:47:27 pause-796891 crio[2159]: time="2025-12-02T20:47:27.186675746Z" level=info msg="No systemd watchdog enabled"
	Dec 02 20:47:27 pause-796891 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	505fac029aa8e       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   11 seconds ago      Running             coredns                   0                   6c3d967b27b7f       coredns-66bc5c9577-w92qd               kube-system
	793ea7c3faa0f       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   23 seconds ago      Running             kindnet-cni               0                   08311a30b99c8       kindnet-vc9rd                          kube-system
	a1c9c0e1a152d       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45   23 seconds ago      Running             kube-proxy                0                   52209ee70b167       kube-proxy-xkrx5                       kube-system
	9318d85c2a399       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952   35 seconds ago      Running             kube-scheduler            0                   f62bb4ebe62c5       kube-scheduler-pause-796891            kube-system
	23ddd07bcb5f5       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8   35 seconds ago      Running             kube-controller-manager   0                   0429cbc6bc472       kube-controller-manager-pause-796891   kube-system
	960f6856e1cc1       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85   35 seconds ago      Running             kube-apiserver            0                   f4cdfd1798bc1       kube-apiserver-pause-796891            kube-system
	c11a8e42979ff       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   35 seconds ago      Running             etcd                      0                   728c6d8e8976f       etcd-pause-796891                      kube-system
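	(The listing above is CRI-O's view of the running containers. Assuming the node is still reachable over SSH, roughly the same table can be reproduced with crictl, e.g.
	
	minikube ssh -p pause-796891 -- sudo crictl ps -a
	
	using the crictl binary at /usr/local/bin/crictl that the earlier version check in this log invoked; this is a sketch of how to re-query the node state, not output captured by the test.)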
	
	
	==> coredns [505fac029aa8ebe69ad93ef916db5bd1916f697eabe9a95ab19a1ef2cf11f065] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:34758 - 18108 "HINFO IN 2593030726178042316.6447336225444708699. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.01897556s
	
	
	==> describe nodes <==
	Name:               pause-796891
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-796891
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c0dec3f81fa1d67ed4ee425e0873d0aa009f9b92
	                    minikube.k8s.io/name=pause-796891
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_02T20_47_06_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 02 Dec 2025 20:47:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-796891
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 02 Dec 2025 20:47:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 02 Dec 2025 20:47:25 +0000   Tue, 02 Dec 2025 20:46:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 02 Dec 2025 20:47:25 +0000   Tue, 02 Dec 2025 20:46:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 02 Dec 2025 20:47:25 +0000   Tue, 02 Dec 2025 20:46:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 02 Dec 2025 20:47:25 +0000   Tue, 02 Dec 2025 20:47:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-796891
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 c31a325af81b969158c21fa769271857
	  System UUID:                fd3db284-124c-47b4-9667-a25865819ac7
	  Boot ID:                    9dd5456f-e394-4b4b-9458-48d26faf507e
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-w92qd                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     23s
	  kube-system                 etcd-pause-796891                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         28s
	  kube-system                 kindnet-vc9rd                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      24s
	  kube-system                 kube-apiserver-pause-796891             250m (3%)     0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-controller-manager-pause-796891    200m (2%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-proxy-xkrx5                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	  kube-system                 kube-scheduler-pause-796891             100m (1%)     0 (0%)      0 (0%)           0 (0%)         28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 22s                kube-proxy       
	  Normal  Starting                 38s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  37s (x8 over 38s)  kubelet          Node pause-796891 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    37s (x8 over 38s)  kubelet          Node pause-796891 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     37s (x8 over 38s)  kubelet          Node pause-796891 status is now: NodeHasSufficientPID
	  Normal  Starting                 28s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  28s                kubelet          Node pause-796891 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28s                kubelet          Node pause-796891 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28s                kubelet          Node pause-796891 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           24s                node-controller  Node pause-796891 event: Registered Node pause-796891 in Controller
	  Normal  NodeReady                12s                kubelet          Node pause-796891 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 2e f4 c0 f2 56 fb 08 06
	[  +0.000355] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 06 95 9a 02 fc fb 08 06
	[Dec 2 19:57] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000013] ll header: 00000000: a6 a8 6c 2c e4 fb 66 95 0d 9f b9 e6 08 00
	[  +1.020139] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a6 a8 6c 2c e4 fb 66 95 0d 9f b9 e6 08 00
	[  +1.023918] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: a6 a8 6c 2c e4 fb 66 95 0d 9f b9 e6 08 00
	[  +1.023921] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: a6 a8 6c 2c e4 fb 66 95 0d 9f b9 e6 08 00
	[  +1.023940] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a6 a8 6c 2c e4 fb 66 95 0d 9f b9 e6 08 00
	[  +1.023881] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: a6 a8 6c 2c e4 fb 66 95 0d 9f b9 e6 08 00
	[  +2.047855] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 a8 6c 2c e4 fb 66 95 0d 9f b9 e6 08 00
	[  +4.031797] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 a8 6c 2c e4 fb 66 95 0d 9f b9 e6 08 00
	[  +8.191553] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: a6 a8 6c 2c e4 fb 66 95 0d 9f b9 e6 08 00
	[Dec 2 19:58] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: a6 a8 6c 2c e4 fb 66 95 0d 9f b9 e6 08 00
	[ +32.254238] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a6 a8 6c 2c e4 fb 66 95 0d 9f b9 e6 08 00
	
	
	==> etcd [c11a8e42979ff1d43624fe1ca5a69467905d817d98358acd034602c84ec2d6c5] <==
	{"level":"warn","ts":"2025-12-02T20:47:02.117112Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"126.792651ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722597460941008902 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/clusterroles/view\" mod_revision:0 > success:<request_put:<key:\"/registry/clusterroles/view\" value_size:673 >> failure:<>>","response":"size:14"}
	{"level":"info","ts":"2025-12-02T20:47:02.117186Z","caller":"traceutil/trace.go:172","msg":"trace[1512788607] transaction","detail":"{read_only:false; response_revision:88; number_of_response:1; }","duration":"253.692652ms","start":"2025-12-02T20:47:01.863482Z","end":"2025-12-02T20:47:02.117174Z","steps":["trace[1512788607] 'process raft request'  (duration: 126.762258ms)","trace[1512788607] 'compare'  (duration: 126.661129ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-02T20:47:02.117131Z","caller":"traceutil/trace.go:172","msg":"trace[935880625] range","detail":"{range_begin:/registry/events/default/pause-796891.187d80f458bff806; range_end:; response_count:1; response_revision:87; }","duration":"252.942402ms","start":"2025-12-02T20:47:01.864172Z","end":"2025-12-02T20:47:02.117115Z","steps":["trace[935880625] 'agreement among raft nodes before linearized reading'  (duration: 126.115098ms)","trace[935880625] 'range keys from in-memory index tree'  (duration: 126.590521ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-02T20:47:02.576596Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"255.399054ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722597460941008911 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/clusterroles/system:aggregate-to-edit\" mod_revision:0 > success:<request_put:<key:\"/registry/clusterroles/system:aggregate-to-edit\" value_size:2065 >> failure:<>>","response":"size:14"}
	{"level":"info","ts":"2025-12-02T20:47:02.576668Z","caller":"traceutil/trace.go:172","msg":"trace[219182840] transaction","detail":"{read_only:false; response_revision:92; number_of_response:1; }","duration":"382.89349ms","start":"2025-12-02T20:47:02.193764Z","end":"2025-12-02T20:47:02.576658Z","steps":["trace[219182840] 'process raft request'  (duration: 127.393118ms)","trace[219182840] 'compare'  (duration: 255.299282ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-02T20:47:02.576699Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-02T20:47:02.193747Z","time spent":"382.941738ms","remote":"127.0.0.1:37502","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":2120,"response count":0,"response size":37,"request content":"compare:<target:MOD key:\"/registry/clusterroles/system:aggregate-to-edit\" mod_revision:0 > success:<request_put:<key:\"/registry/clusterroles/system:aggregate-to-edit\" value_size:2065 >> failure:<>"}
	{"level":"info","ts":"2025-12-02T20:47:02.579263Z","caller":"traceutil/trace.go:172","msg":"trace[509960738] linearizableReadLoop","detail":"{readStateIndex:97; appliedIndex:97; }","duration":"163.172703ms","start":"2025-12-02T20:47:02.416060Z","end":"2025-12-02T20:47:02.579233Z","steps":["trace[509960738] 'read index received'  (duration: 163.16773ms)","trace[509960738] 'applied index is now lower than readState.Index'  (duration: 3.824µs)"],"step_count":2}
	{"level":"info","ts":"2025-12-02T20:47:02.579359Z","caller":"traceutil/trace.go:172","msg":"trace[1534031233] transaction","detail":"{read_only:false; response_revision:93; number_of_response:1; }","duration":"385.164378ms","start":"2025-12-02T20:47:02.194186Z","end":"2025-12-02T20:47:02.579350Z","steps":["trace[1534031233] 'process raft request'  (duration: 385.08578ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-02T20:47:02.579381Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"163.317542ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-12-02T20:47:02.579406Z","caller":"traceutil/trace.go:172","msg":"trace[809125091] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:93; }","duration":"163.364285ms","start":"2025-12-02T20:47:02.416035Z","end":"2025-12-02T20:47:02.579400Z","steps":["trace[809125091] 'agreement among raft nodes before linearized reading'  (duration: 163.303995ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-02T20:47:02.579595Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-02T20:47:02.194167Z","time spent":"385.216613ms","remote":"127.0.0.1:36864","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":670,"response count":0,"response size":37,"request content":"compare:<target:MOD key:\"/registry/events/default/pause-796891.187d80f458bfc9fd\" mod_revision:87 > success:<request_put:<key:\"/registry/events/default/pause-796891.187d80f458bfc9fd\" value_size:598 lease:499225424086233048 >> failure:<request_range:<key:\"/registry/events/default/pause-796891.187d80f458bfc9fd\" > >"}
	{"level":"info","ts":"2025-12-02T20:47:02.710353Z","caller":"traceutil/trace.go:172","msg":"trace[1414011434] linearizableReadLoop","detail":"{readStateIndex:98; appliedIndex:98; }","duration":"128.354047ms","start":"2025-12-02T20:47:02.581968Z","end":"2025-12-02T20:47:02.710322Z","steps":["trace[1414011434] 'read index received'  (duration: 128.346573ms)","trace[1414011434] 'applied index is now lower than readState.Index'  (duration: 6.427µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-02T20:47:02.815371Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"233.383747ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/default/pause-796891.187d80f458bff806\" limit:1 ","response":"range_response_count:1 size:678"}
	{"level":"info","ts":"2025-12-02T20:47:02.815451Z","caller":"traceutil/trace.go:172","msg":"trace[354733449] range","detail":"{range_begin:/registry/events/default/pause-796891.187d80f458bff806; range_end:; response_count:1; response_revision:93; }","duration":"233.477157ms","start":"2025-12-02T20:47:02.581959Z","end":"2025-12-02T20:47:02.815436Z","steps":["trace[354733449] 'agreement among raft nodes before linearized reading'  (duration: 128.489763ms)","trace[354733449] 'range keys from in-memory index tree'  (duration: 104.773392ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-02T20:47:02.815399Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"104.997987ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722597460941008916 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/clusterroles/system:aggregate-to-view\" mod_revision:0 > success:<request_put:<key:\"/registry/clusterroles/system:aggregate-to-view\" value_size:1962 >> failure:<>>","response":"size:14"}
	{"level":"info","ts":"2025-12-02T20:47:02.815584Z","caller":"traceutil/trace.go:172","msg":"trace[106614381] transaction","detail":"{read_only:false; response_revision:94; number_of_response:1; }","duration":"233.796379ms","start":"2025-12-02T20:47:02.581769Z","end":"2025-12-02T20:47:02.815565Z","steps":["trace[106614381] 'process raft request'  (duration: 128.583943ms)","trace[106614381] 'compare'  (duration: 104.885776ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-02T20:47:02.989675Z","caller":"traceutil/trace.go:172","msg":"trace[1208988174] linearizableReadLoop","detail":"{readStateIndex:101; appliedIndex:101; }","duration":"138.393928ms","start":"2025-12-02T20:47:02.851247Z","end":"2025-12-02T20:47:02.989641Z","steps":["trace[1208988174] 'read index received'  (duration: 138.38224ms)","trace[1208988174] 'applied index is now lower than readState.Index'  (duration: 10.433µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-02T20:47:03.206132Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"354.86248ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:node\" limit:1 ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-12-02T20:47:03.206204Z","caller":"traceutil/trace.go:172","msg":"trace[1317133690] range","detail":"{range_begin:/registry/clusterroles/system:node; range_end:; response_count:0; response_revision:96; }","duration":"354.949058ms","start":"2025-12-02T20:47:02.851239Z","end":"2025-12-02T20:47:03.206188Z","steps":["trace[1317133690] 'agreement among raft nodes before linearized reading'  (duration: 138.496472ms)","trace[1317133690] 'range keys from in-memory index tree'  (duration: 216.311905ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-02T20:47:03.206244Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-02T20:47:02.851229Z","time spent":"355.004422ms","remote":"127.0.0.1:37502","response type":"/etcdserverpb.KV/Range","request count":0,"request size":38,"response count":0,"response size":27,"request content":"key:\"/registry/clusterroles/system:node\" limit:1 "}
	{"level":"warn","ts":"2025-12-02T20:47:03.206316Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"216.563447ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722597460941008922 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/pause-796891.187d80f458c014ff\" mod_revision:91 > success:<request_put:<key:\"/registry/events/default/pause-796891.187d80f458c014ff\" value_size:592 lease:499225424086233048 >> failure:<request_range:<key:\"/registry/events/default/pause-796891.187d80f458c014ff\" > >>","response":"size:14"}
	{"level":"info","ts":"2025-12-02T20:47:03.206442Z","caller":"traceutil/trace.go:172","msg":"trace[1179217833] transaction","detail":"{read_only:false; response_revision:98; number_of_response:1; }","duration":"346.152304ms","start":"2025-12-02T20:47:02.860280Z","end":"2025-12-02T20:47:03.206432Z","steps":["trace[1179217833] 'process raft request'  (duration: 346.095279ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-02T20:47:03.206490Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-02T20:47:02.860259Z","time spent":"346.207036ms","remote":"127.0.0.1:37106","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":5823,"response count":0,"response size":37,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-controller-manager-pause-796891\" mod_revision:0 > success:<request_put:<key:\"/registry/pods/kube-system/kube-controller-manager-pause-796891\" value_size:5752 >> failure:<>"}
	{"level":"info","ts":"2025-12-02T20:47:03.206463Z","caller":"traceutil/trace.go:172","msg":"trace[228947112] transaction","detail":"{read_only:false; response_revision:97; number_of_response:1; }","duration":"355.377284ms","start":"2025-12-02T20:47:02.851061Z","end":"2025-12-02T20:47:03.206438Z","steps":["trace[228947112] 'process raft request'  (duration: 138.638687ms)","trace[228947112] 'compare'  (duration: 216.463207ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-02T20:47:03.207305Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-02T20:47:02.851038Z","time spent":"356.076959ms","remote":"127.0.0.1:36864","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":664,"response count":0,"response size":37,"request content":"compare:<target:MOD key:\"/registry/events/default/pause-796891.187d80f458c014ff\" mod_revision:91 > success:<request_put:<key:\"/registry/events/default/pause-796891.187d80f458c014ff\" value_size:592 lease:499225424086233048 >> failure:<request_range:<key:\"/registry/events/default/pause-796891.187d80f458c014ff\" > >"}
	
	
	==> kernel <==
	 20:47:33 up  2:29,  0 user,  load average: 3.73, 1.76, 1.34
	Linux pause-796891 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [793ea7c3faa0f3cd934cc0b58d2de439c2b3bfb6db801cfd46e1ed4c7ddca010] <==
	I1202 20:47:10.577835       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1202 20:47:10.578945       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1202 20:47:10.579214       1 main.go:148] setting mtu 1500 for CNI 
	I1202 20:47:10.579241       1 main.go:178] kindnetd IP family: "ipv4"
	I1202 20:47:10.579263       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-02T20:47:10Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1202 20:47:10.974660       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1202 20:47:10.975034       1 controller.go:381] "Waiting for informer caches to sync"
	I1202 20:47:10.984061       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1202 20:47:10.985136       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1202 20:47:11.085638       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1202 20:47:11.086284       1 metrics.go:72] Registering metrics
	I1202 20:47:11.086372       1 controller.go:711] "Syncing nftables rules"
	I1202 20:47:20.796174       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1202 20:47:20.796244       1 main.go:301] handling current node
	I1202 20:47:30.796188       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1202 20:47:30.796222       1 main.go:301] handling current node
	
	
	==> kube-apiserver [960f6856e1cc1fc4d6d11314186574243ec408ec9cf17d2adcd795f4de63295e] <==
	I1202 20:47:00.298291       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1202 20:47:00.319808       1 controller.go:667] quota admission added evaluator for: namespaces
	I1202 20:47:00.336213       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1202 20:47:00.336783       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1202 20:47:00.345658       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1202 20:47:00.345816       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1202 20:47:00.345921       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1202 20:47:00.461197       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1202 20:47:01.147255       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1202 20:47:01.384345       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1202 20:47:01.384367       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1202 20:47:03.986171       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1202 20:47:04.056555       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1202 20:47:04.145722       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1202 20:47:04.235793       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1202 20:47:04.255740       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1202 20:47:04.258984       1 controller.go:667] quota admission added evaluator for: endpoints
	I1202 20:47:04.265314       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1202 20:47:05.249180       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1202 20:47:05.260048       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1202 20:47:05.270383       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1202 20:47:09.842900       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1202 20:47:09.846740       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1202 20:47:09.890568       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1202 20:47:10.157411       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [23ddd07bcb5f5b56d13f1e4f94e80d95f00c7337ca84530879e15c87231fe5a1] <==
	I1202 20:47:09.137270       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1202 20:47:09.137432       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1202 20:47:09.137449       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1202 20:47:09.137491       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1202 20:47:09.137574       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1202 20:47:09.137692       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1202 20:47:09.137762       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1202 20:47:09.137892       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1202 20:47:09.138138       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1202 20:47:09.138201       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1202 20:47:09.138232       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1202 20:47:09.141097       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1202 20:47:09.142230       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1202 20:47:09.143187       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1202 20:47:09.143275       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1202 20:47:09.143326       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1202 20:47:09.143332       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1202 20:47:09.143340       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1202 20:47:09.145254       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1202 20:47:09.147407       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1202 20:47:09.149653       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1202 20:47:09.150848       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1202 20:47:09.153267       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-796891" podCIDRs=["10.244.0.0/24"]
	I1202 20:47:09.164346       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1202 20:47:24.091289       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [a1c9c0e1a152d0e55298e0b7dbd3ffe1cf2959ae52fbb82cfa7914940cd2e07f] <==
	I1202 20:47:10.356220       1 server_linux.go:53] "Using iptables proxy"
	I1202 20:47:10.425240       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1202 20:47:10.525337       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1202 20:47:10.525378       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1202 20:47:10.525467       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1202 20:47:10.553178       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1202 20:47:10.553243       1 server_linux.go:132] "Using iptables Proxier"
	I1202 20:47:10.559458       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1202 20:47:10.559814       1 server.go:527] "Version info" version="v1.34.2"
	I1202 20:47:10.559844       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 20:47:10.561408       1 config.go:200] "Starting service config controller"
	I1202 20:47:10.561433       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1202 20:47:10.561412       1 config.go:106] "Starting endpoint slice config controller"
	I1202 20:47:10.561463       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1202 20:47:10.561490       1 config.go:403] "Starting serviceCIDR config controller"
	I1202 20:47:10.561497       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1202 20:47:10.562012       1 config.go:309] "Starting node config controller"
	I1202 20:47:10.562029       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1202 20:47:10.562036       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1202 20:47:10.662062       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1202 20:47:10.662132       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1202 20:47:10.662137       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [9318d85c2a3992238185b0b65f90b7afce8f42bd6fd6934e4563b4bfc16b05a8] <==
	E1202 20:47:01.127568       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1202 20:47:01.150144       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1202 20:47:01.196365       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1202 20:47:01.264603       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1202 20:47:01.322554       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1202 20:47:01.367132       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1202 20:47:01.371498       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1202 20:47:01.385266       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1202 20:47:01.467538       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1202 20:47:01.505118       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1202 20:47:01.515426       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1202 20:47:01.516318       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1202 20:47:01.536415       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1202 20:47:01.758917       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1202 20:47:01.771212       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1202 20:47:01.805733       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1202 20:47:01.807860       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1202 20:47:03.083358       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1202 20:47:03.241910       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1202 20:47:03.346376       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1202 20:47:03.494007       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1202 20:47:03.577572       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1202 20:47:03.695315       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1202 20:47:03.965881       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1202 20:47:08.866491       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 02 20:47:06 pause-796891 kubelet[1296]: E1202 20:47:06.237275    1296 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-796891\" already exists" pod="kube-system/kube-apiserver-pause-796891"
	Dec 02 20:47:06 pause-796891 kubelet[1296]: I1202 20:47:06.245497    1296 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-pause-796891" podStartSLOduration=4.245471856 podStartE2EDuration="4.245471856s" podCreationTimestamp="2025-12-02 20:47:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:47:06.243558205 +0000 UTC m=+1.173508374" watchObservedRunningTime="2025-12-02 20:47:06.245471856 +0000 UTC m=+1.175422018"
	Dec 02 20:47:06 pause-796891 kubelet[1296]: I1202 20:47:06.257948    1296 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-pause-796891" podStartSLOduration=1.257925176 podStartE2EDuration="1.257925176s" podCreationTimestamp="2025-12-02 20:47:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:47:06.256906872 +0000 UTC m=+1.186857034" watchObservedRunningTime="2025-12-02 20:47:06.257925176 +0000 UTC m=+1.187875344"
	Dec 02 20:47:06 pause-796891 kubelet[1296]: I1202 20:47:06.285899    1296 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-pause-796891" podStartSLOduration=1.285872263 podStartE2EDuration="1.285872263s" podCreationTimestamp="2025-12-02 20:47:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:47:06.271056962 +0000 UTC m=+1.201007152" watchObservedRunningTime="2025-12-02 20:47:06.285872263 +0000 UTC m=+1.215822430"
	Dec 02 20:47:06 pause-796891 kubelet[1296]: I1202 20:47:06.286101    1296 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-pause-796891" podStartSLOduration=1.2860546130000001 podStartE2EDuration="1.286054613s" podCreationTimestamp="2025-12-02 20:47:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:47:06.284543136 +0000 UTC m=+1.214493303" watchObservedRunningTime="2025-12-02 20:47:06.286054613 +0000 UTC m=+1.216004782"
	Dec 02 20:47:09 pause-796891 kubelet[1296]: I1202 20:47:09.212483    1296 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 02 20:47:09 pause-796891 kubelet[1296]: I1202 20:47:09.213647    1296 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 02 20:47:09 pause-796891 kubelet[1296]: I1202 20:47:09.996246    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qqg5\" (UniqueName: \"kubernetes.io/projected/9ad656d0-4a36-414b-9a25-f870fddf8d50-kube-api-access-9qqg5\") pod \"kube-proxy-xkrx5\" (UID: \"9ad656d0-4a36-414b-9a25-f870fddf8d50\") " pod="kube-system/kube-proxy-xkrx5"
	Dec 02 20:47:09 pause-796891 kubelet[1296]: I1202 20:47:09.996302    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcth7\" (UniqueName: \"kubernetes.io/projected/48eea110-a9bf-4374-bef9-b48a882a20c4-kube-api-access-qcth7\") pod \"kindnet-vc9rd\" (UID: \"48eea110-a9bf-4374-bef9-b48a882a20c4\") " pod="kube-system/kindnet-vc9rd"
	Dec 02 20:47:09 pause-796891 kubelet[1296]: I1202 20:47:09.996335    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9ad656d0-4a36-414b-9a25-f870fddf8d50-kube-proxy\") pod \"kube-proxy-xkrx5\" (UID: \"9ad656d0-4a36-414b-9a25-f870fddf8d50\") " pod="kube-system/kube-proxy-xkrx5"
	Dec 02 20:47:09 pause-796891 kubelet[1296]: I1202 20:47:09.996425    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9ad656d0-4a36-414b-9a25-f870fddf8d50-xtables-lock\") pod \"kube-proxy-xkrx5\" (UID: \"9ad656d0-4a36-414b-9a25-f870fddf8d50\") " pod="kube-system/kube-proxy-xkrx5"
	Dec 02 20:47:09 pause-796891 kubelet[1296]: I1202 20:47:09.996493    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9ad656d0-4a36-414b-9a25-f870fddf8d50-lib-modules\") pod \"kube-proxy-xkrx5\" (UID: \"9ad656d0-4a36-414b-9a25-f870fddf8d50\") " pod="kube-system/kube-proxy-xkrx5"
	Dec 02 20:47:09 pause-796891 kubelet[1296]: I1202 20:47:09.996522    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/48eea110-a9bf-4374-bef9-b48a882a20c4-lib-modules\") pod \"kindnet-vc9rd\" (UID: \"48eea110-a9bf-4374-bef9-b48a882a20c4\") " pod="kube-system/kindnet-vc9rd"
	Dec 02 20:47:09 pause-796891 kubelet[1296]: I1202 20:47:09.996620    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/48eea110-a9bf-4374-bef9-b48a882a20c4-cni-cfg\") pod \"kindnet-vc9rd\" (UID: \"48eea110-a9bf-4374-bef9-b48a882a20c4\") " pod="kube-system/kindnet-vc9rd"
	Dec 02 20:47:09 pause-796891 kubelet[1296]: I1202 20:47:09.996671    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/48eea110-a9bf-4374-bef9-b48a882a20c4-xtables-lock\") pod \"kindnet-vc9rd\" (UID: \"48eea110-a9bf-4374-bef9-b48a882a20c4\") " pod="kube-system/kindnet-vc9rd"
	Dec 02 20:47:11 pause-796891 kubelet[1296]: I1202 20:47:11.256734    1296 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-vc9rd" podStartSLOduration=2.256706066 podStartE2EDuration="2.256706066s" podCreationTimestamp="2025-12-02 20:47:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:47:11.242062869 +0000 UTC m=+6.172013036" watchObservedRunningTime="2025-12-02 20:47:11.256706066 +0000 UTC m=+6.186656240"
	Dec 02 20:47:11 pause-796891 kubelet[1296]: I1202 20:47:11.275871    1296 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-xkrx5" podStartSLOduration=2.275847085 podStartE2EDuration="2.275847085s" podCreationTimestamp="2025-12-02 20:47:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:47:11.275484603 +0000 UTC m=+6.205434769" watchObservedRunningTime="2025-12-02 20:47:11.275847085 +0000 UTC m=+6.205797252"
	Dec 02 20:47:21 pause-796891 kubelet[1296]: I1202 20:47:21.213339    1296 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Dec 02 20:47:21 pause-796891 kubelet[1296]: I1202 20:47:21.284185    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0cb44388-0b22-4297-9ab4-151f169fe011-config-volume\") pod \"coredns-66bc5c9577-w92qd\" (UID: \"0cb44388-0b22-4297-9ab4-151f169fe011\") " pod="kube-system/coredns-66bc5c9577-w92qd"
	Dec 02 20:47:21 pause-796891 kubelet[1296]: I1202 20:47:21.284268    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjbc9\" (UniqueName: \"kubernetes.io/projected/0cb44388-0b22-4297-9ab4-151f169fe011-kube-api-access-wjbc9\") pod \"coredns-66bc5c9577-w92qd\" (UID: \"0cb44388-0b22-4297-9ab4-151f169fe011\") " pod="kube-system/coredns-66bc5c9577-w92qd"
	Dec 02 20:47:22 pause-796891 kubelet[1296]: I1202 20:47:22.281979    1296 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-w92qd" podStartSLOduration=12.281955168 podStartE2EDuration="12.281955168s" podCreationTimestamp="2025-12-02 20:47:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:47:22.270392246 +0000 UTC m=+17.200342437" watchObservedRunningTime="2025-12-02 20:47:22.281955168 +0000 UTC m=+17.211905335"
	Dec 02 20:47:30 pause-796891 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 02 20:47:30 pause-796891 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 02 20:47:30 pause-796891 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 20:47:30 pause-796891 systemd[1]: kubelet.service: Consumed 1.259s CPU time.
	

                                                
                                                
-- /stdout --
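Two details in the log dump above stand out when reading this failure: etcd logs several "apply request took too long" warnings well above its 100ms expected duration while the host reports a load average of 3.73, and the kubelet unit is stopped at 20:47:30, a few seconds before these logs were collected. If more kubelet context than the 25 captured lines is needed, the journal can be pulled the same way the Audit table further below does for other profiles (a sketch; only the profile name differs from the audited commands):

  out/minikube-linux-amd64 ssh -p pause-796891 sudo journalctl -xeu kubelet --all --full --no-pager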
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-796891 -n pause-796891
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-796891 -n pause-796891: exit status 2 (373.40632ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-796891 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
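For manual triage, the two checks the harness ran above can be replayed against the same profile; the commands below are copied verbatim from the (dbg) Run lines, nothing new is added:

  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-796891 -n pause-796891
  kubectl --context pause-796891 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running

The harness already treats the exit status 2 from the status command as possibly benign ("may be ok"), so the pod listing is the follow-up check it relies on.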
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-796891
helpers_test.go:243: (dbg) docker inspect pause-796891:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "36a013bef5c0dcfabb28c265ddb66930b8ec8f02141f5e1e9a8546cb86f55884",
	        "Created": "2025-12-02T20:46:39.882588595Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 605538,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-02T20:46:39.950248689Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/36a013bef5c0dcfabb28c265ddb66930b8ec8f02141f5e1e9a8546cb86f55884/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/36a013bef5c0dcfabb28c265ddb66930b8ec8f02141f5e1e9a8546cb86f55884/hostname",
	        "HostsPath": "/var/lib/docker/containers/36a013bef5c0dcfabb28c265ddb66930b8ec8f02141f5e1e9a8546cb86f55884/hosts",
	        "LogPath": "/var/lib/docker/containers/36a013bef5c0dcfabb28c265ddb66930b8ec8f02141f5e1e9a8546cb86f55884/36a013bef5c0dcfabb28c265ddb66930b8ec8f02141f5e1e9a8546cb86f55884-json.log",
	        "Name": "/pause-796891",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-796891:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-796891",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "36a013bef5c0dcfabb28c265ddb66930b8ec8f02141f5e1e9a8546cb86f55884",
	                "LowerDir": "/var/lib/docker/overlay2/a8e074569e09c63325dedcca9a73cf7004ec2a340680cadbecf8b2d807f3c814-init/diff:/var/lib/docker/overlay2/49bb44e987885c48600d5ae7ad3c81cad82a00f38070c0460882c5746d9fae59/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a8e074569e09c63325dedcca9a73cf7004ec2a340680cadbecf8b2d807f3c814/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a8e074569e09c63325dedcca9a73cf7004ec2a340680cadbecf8b2d807f3c814/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a8e074569e09c63325dedcca9a73cf7004ec2a340680cadbecf8b2d807f3c814/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "pause-796891",
	                "Source": "/var/lib/docker/volumes/pause-796891/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-796891",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-796891",
	                "name.minikube.sigs.k8s.io": "pause-796891",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "35dba8fe1ac630730e5b0d6943f1e3a3cfcff1179d763761909b7aea10119474",
	            "SandboxKey": "/var/run/docker/netns/35dba8fe1ac6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33358"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33359"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33362"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33360"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33361"
	                    }
	                ]
	            },
	            "Networks": {
	                "pause-796891": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b07d217213ccf6fa9962641c6794a6088506dac5cada57db9351fb7ca34bc5a1",
	                    "EndpointID": "66f6d63570d4030b8d5358423336079e5c35409459cf60e030d119c2df0ca300",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "d2:d5:22:fe:85:5b",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-796891",
	                        "36a013bef5c0"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
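Rather than re-reading the full inspect JSON above, the container's runtime state and port mappings can be pulled out with docker's Go-template filter; a minimal sketch using only fields that appear in the output above (8443 is the API server port the profile exposes):

  docker inspect -f 'status={{.State.Status}} paused={{.State.Paused}}' pause-796891
  docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' pause-796891

The second template is the same pattern the minikube cli_runner uses for the 22/tcp SSH port later in this log, just pointed at 8443/tcp instead.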
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-796891 -n pause-796891
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-796891 -n pause-796891: exit status 2 (367.953849ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-796891 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-796891 logs -n 25: (1.133802068s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                 ARGS                                                  │       PROFILE       │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p kubenet-775392 sudo systemctl status crio --all --full --no-pager                                  │ kubenet-775392      │ jenkins │ v1.37.0 │ 02 Dec 25 20:47 UTC │                     │
	│ ssh     │ -p kubenet-775392 sudo systemctl cat crio --no-pager                                                  │ kubenet-775392      │ jenkins │ v1.37.0 │ 02 Dec 25 20:47 UTC │                     │
	│ ssh     │ -p kubenet-775392 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                        │ kubenet-775392      │ jenkins │ v1.37.0 │ 02 Dec 25 20:47 UTC │                     │
	│ ssh     │ -p kubenet-775392 sudo crio config                                                                    │ kubenet-775392      │ jenkins │ v1.37.0 │ 02 Dec 25 20:47 UTC │                     │
	│ delete  │ -p kubenet-775392                                                                                     │ kubenet-775392      │ jenkins │ v1.37.0 │ 02 Dec 25 20:47 UTC │ 02 Dec 25 20:47 UTC │
	│ start   │ -p false-775392 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio │ false-775392        │ jenkins │ v1.37.0 │ 02 Dec 25 20:47 UTC │                     │
	│ ssh     │ -p NoKubernetes-811845 sudo systemctl is-active --quiet service kubelet                               │ NoKubernetes-811845 │ jenkins │ v1.37.0 │ 02 Dec 25 20:47 UTC │                     │
	│ ssh     │ -p false-775392 sudo cat /etc/nsswitch.conf                                                           │ false-775392        │ jenkins │ v1.37.0 │ 02 Dec 25 20:47 UTC │                     │
	│ ssh     │ -p false-775392 sudo cat /etc/hosts                                                                   │ false-775392        │ jenkins │ v1.37.0 │ 02 Dec 25 20:47 UTC │                     │
	│ ssh     │ -p false-775392 sudo cat /etc/resolv.conf                                                             │ false-775392        │ jenkins │ v1.37.0 │ 02 Dec 25 20:47 UTC │                     │
	│ ssh     │ -p false-775392 sudo crictl pods                                                                      │ false-775392        │ jenkins │ v1.37.0 │ 02 Dec 25 20:47 UTC │                     │
	│ delete  │ -p NoKubernetes-811845                                                                                │ NoKubernetes-811845 │ jenkins │ v1.37.0 │ 02 Dec 25 20:47 UTC │                     │
	│ ssh     │ -p false-775392 sudo crictl ps --all                                                                  │ false-775392        │ jenkins │ v1.37.0 │ 02 Dec 25 20:47 UTC │                     │
	│ ssh     │ -p false-775392 sudo find /etc/cni -type f -exec sh -c 'echo {}; cat {}' \;                           │ false-775392        │ jenkins │ v1.37.0 │ 02 Dec 25 20:47 UTC │                     │
	│ ssh     │ -p false-775392 sudo ip a s                                                                           │ false-775392        │ jenkins │ v1.37.0 │ 02 Dec 25 20:47 UTC │                     │
	│ ssh     │ -p false-775392 sudo ip r s                                                                           │ false-775392        │ jenkins │ v1.37.0 │ 02 Dec 25 20:47 UTC │                     │
	│ ssh     │ -p false-775392 sudo iptables-save                                                                    │ false-775392        │ jenkins │ v1.37.0 │ 02 Dec 25 20:47 UTC │                     │
	│ ssh     │ -p false-775392 sudo iptables -t nat -L -n -v                                                         │ false-775392        │ jenkins │ v1.37.0 │ 02 Dec 25 20:47 UTC │                     │
	│ ssh     │ -p false-775392 sudo systemctl status kubelet --all --full --no-pager                                 │ false-775392        │ jenkins │ v1.37.0 │ 02 Dec 25 20:47 UTC │                     │
	│ ssh     │ -p false-775392 sudo systemctl cat kubelet --no-pager                                                 │ false-775392        │ jenkins │ v1.37.0 │ 02 Dec 25 20:47 UTC │                     │
	│ ssh     │ -p false-775392 sudo journalctl -xeu kubelet --all --full --no-pager                                  │ false-775392        │ jenkins │ v1.37.0 │ 02 Dec 25 20:47 UTC │                     │
	│ ssh     │ -p false-775392 sudo cat /etc/kubernetes/kubelet.conf                                                 │ false-775392        │ jenkins │ v1.37.0 │ 02 Dec 25 20:47 UTC │                     │
	│ ssh     │ -p false-775392 sudo cat /var/lib/kubelet/config.yaml                                                 │ false-775392        │ jenkins │ v1.37.0 │ 02 Dec 25 20:47 UTC │                     │
	│ ssh     │ -p false-775392 sudo systemctl status docker --all --full --no-pager                                  │ false-775392        │ jenkins │ v1.37.0 │ 02 Dec 25 20:47 UTC │                     │
	│ ssh     │ -p false-775392 sudo systemctl cat docker --no-pager                                                  │ false-775392        │ jenkins │ v1.37.0 │ 02 Dec 25 20:47 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 20:47:31
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 20:47:31.911678  623164 out.go:360] Setting OutFile to fd 1 ...
	I1202 20:47:31.911934  623164 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:47:31.911943  623164 out.go:374] Setting ErrFile to fd 2...
	I1202 20:47:31.911948  623164 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:47:31.912154  623164 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-407427/.minikube/bin
	I1202 20:47:31.912632  623164 out.go:368] Setting JSON to false
	I1202 20:47:31.913868  623164 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":8996,"bootTime":1764699456,"procs":280,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 20:47:31.913936  623164 start.go:143] virtualization: kvm guest
	I1202 20:47:31.915690  623164 out.go:179] * [false-775392] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1202 20:47:31.917091  623164 notify.go:221] Checking for updates...
	I1202 20:47:31.917098  623164 out.go:179]   - MINIKUBE_LOCATION=21997
	I1202 20:47:31.918439  623164 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 20:47:31.919811  623164 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-407427/kubeconfig
	I1202 20:47:31.921107  623164 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-407427/.minikube
	I1202 20:47:31.923096  623164 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1202 20:47:31.924420  623164 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 20:47:31.926232  623164 config.go:182] Loaded profile config "NoKubernetes-811845": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1202 20:47:31.926392  623164 config.go:182] Loaded profile config "pause-796891": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 20:47:31.926505  623164 config.go:182] Loaded profile config "stopped-upgrade-814137": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1202 20:47:31.926625  623164 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 20:47:31.951009  623164 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1202 20:47:31.951144  623164 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 20:47:32.011361  623164 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-02 20:47:32.00108819 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 20:47:32.011480  623164 docker.go:319] overlay module found
	I1202 20:47:32.013143  623164 out.go:179] * Using the docker driver based on user configuration
	I1202 20:47:32.014250  623164 start.go:309] selected driver: docker
	I1202 20:47:32.014270  623164 start.go:927] validating driver "docker" against <nil>
	I1202 20:47:32.014283  623164 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 20:47:32.015834  623164 out.go:203] 
	W1202 20:47:32.016885  623164 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1202 20:47:32.018010  623164 out.go:203] 
	I1202 20:47:30.387896  619411 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 20:47:30.387945  619411 machine.go:97] duration metric: took 4.141394246s to provisionDockerMachine
	I1202 20:47:30.387964  619411 start.go:293] postStartSetup for "NoKubernetes-811845" (driver="docker")
	I1202 20:47:30.387976  619411 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 20:47:30.388032  619411 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 20:47:30.388103  619411 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-811845
	I1202 20:47:30.411688  619411 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33388 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/NoKubernetes-811845/id_rsa Username:docker}
	I1202 20:47:30.523613  619411 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 20:47:30.529750  619411 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 20:47:30.529775  619411 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 20:47:30.529788  619411 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-407427/.minikube/addons for local assets ...
	I1202 20:47:30.529854  619411 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-407427/.minikube/files for local assets ...
	I1202 20:47:30.529944  619411 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-407427/.minikube/files/etc/ssl/certs/4110322.pem -> 4110322.pem in /etc/ssl/certs
	I1202 20:47:30.530126  619411 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 20:47:30.539695  619411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/files/etc/ssl/certs/4110322.pem --> /etc/ssl/certs/4110322.pem (1708 bytes)
	I1202 20:47:30.562565  619411 start.go:296] duration metric: took 174.586868ms for postStartSetup
	I1202 20:47:30.562655  619411 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 20:47:30.562695  619411 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-811845
	I1202 20:47:30.591389  619411 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33388 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/NoKubernetes-811845/id_rsa Username:docker}
	I1202 20:47:30.704236  619411 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 20:47:30.710336  619411 fix.go:56] duration metric: took 4.823827565s for fixHost
	I1202 20:47:30.710356  619411 start.go:83] releasing machines lock for "NoKubernetes-811845", held for 4.823867051s
	I1202 20:47:30.710414  619411 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" NoKubernetes-811845
	I1202 20:47:30.743291  619411 ssh_runner.go:195] Run: cat /version.json
	I1202 20:47:30.743352  619411 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 20:47:30.743375  619411 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-811845
	I1202 20:47:30.743463  619411 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-811845
	I1202 20:47:30.768939  619411 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33388 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/NoKubernetes-811845/id_rsa Username:docker}
	I1202 20:47:30.774027  619411 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33388 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/NoKubernetes-811845/id_rsa Username:docker}
	I1202 20:47:30.872915  619411 ssh_runner.go:195] Run: systemctl --version
	I1202 20:47:30.943442  619411 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 20:47:30.981146  619411 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 20:47:30.986721  619411 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 20:47:30.986789  619411 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 20:47:30.996884  619411 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1202 20:47:30.996899  619411 start.go:496] detecting cgroup driver to use...
	I1202 20:47:30.996964  619411 detect.go:190] detected "systemd" cgroup driver on host os
	I1202 20:47:30.997003  619411 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 20:47:31.013620  619411 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 20:47:31.029873  619411 docker.go:218] disabling cri-docker service (if available) ...
	I1202 20:47:31.029948  619411 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 20:47:31.048621  619411 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 20:47:31.065571  619411 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 20:47:31.160371  619411 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 20:47:31.253148  619411 docker.go:234] disabling docker service ...
	I1202 20:47:31.253234  619411 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 20:47:31.268898  619411 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 20:47:31.283090  619411 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 20:47:31.369321  619411 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 20:47:31.470084  619411 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 20:47:31.487031  619411 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 20:47:31.502857  619411 download.go:108] Downloading: https://dl.k8s.io/release/v0.0.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v0.0.0/bin/linux/amd64/kubeadm.sha1 -> /home/jenkins/minikube-integration/21997-407427/.minikube/cache/linux/amd64/v0.0.0/kubeadm
	I1202 20:47:32.091207  619411 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1202 20:47:32.091259  619411 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:47:32.102152  619411 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1202 20:47:32.102212  619411 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:47:32.111653  619411 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:47:32.122534  619411 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:47:32.133388  619411 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 20:47:32.142615  619411 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 20:47:32.151236  619411 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 20:47:32.159918  619411 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 20:47:32.256579  619411 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1202 20:47:32.409058  619411 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 20:47:32.409191  619411 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 20:47:32.414637  619411 start.go:564] Will wait 60s for crictl version
	I1202 20:47:32.414692  619411 ssh_runner.go:195] Run: which crictl
	I1202 20:47:32.419287  619411 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 20:47:32.450646  619411 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1202 20:47:32.450747  619411 ssh_runner.go:195] Run: crio --version
	I1202 20:47:32.481836  619411 ssh_runner.go:195] Run: crio --version
	I1202 20:47:32.517650  619411 out.go:179] * Preparing CRI-O 1.34.2 ...
	I1202 20:47:32.519237  619411 ssh_runner.go:195] Run: rm -f paused
	I1202 20:47:32.524718  619411 out.go:179] * Done! minikube is ready without Kubernetes!
	I1202 20:47:32.527409  619411 out.go:203] ╭───────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                       │
	│                        * Things to try without Kubernetes ...                         │
	│                                                                                       │
	│    - "minikube ssh" to SSH into minikube's node.                                      │
	│    - "minikube podman-env" to point your podman-cli to the podman inside minikube.    │
	│    - "minikube image" to build images without docker.                                 │
	│                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────╯
	
	
	==> CRI-O <==
	Dec 02 20:47:27 pause-796891 crio[2159]: time="2025-12-02T20:47:27.072544957Z" level=info msg="RDT not available in the host system"
	Dec 02 20:47:27 pause-796891 crio[2159]: time="2025-12-02T20:47:27.07255457Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 02 20:47:27 pause-796891 crio[2159]: time="2025-12-02T20:47:27.073413745Z" level=info msg="Conmon does support the --sync option"
	Dec 02 20:47:27 pause-796891 crio[2159]: time="2025-12-02T20:47:27.073433162Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 02 20:47:27 pause-796891 crio[2159]: time="2025-12-02T20:47:27.073446947Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 02 20:47:27 pause-796891 crio[2159]: time="2025-12-02T20:47:27.07423415Z" level=info msg="Conmon does support the --sync option"
	Dec 02 20:47:27 pause-796891 crio[2159]: time="2025-12-02T20:47:27.074253868Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 02 20:47:27 pause-796891 crio[2159]: time="2025-12-02T20:47:27.078594058Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 02 20:47:27 pause-796891 crio[2159]: time="2025-12-02T20:47:27.078622515Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 02 20:47:27 pause-796891 crio[2159]: time="2025-12-02T20:47:27.079390918Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hoo
ks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_
mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory
= \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/
cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_listen = \"
/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Dec 02 20:47:27 pause-796891 crio[2159]: time="2025-12-02T20:47:27.079938736Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Dec 02 20:47:27 pause-796891 crio[2159]: time="2025-12-02T20:47:27.08000555Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Dec 02 20:47:27 pause-796891 crio[2159]: time="2025-12-02T20:47:27.184744821Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-w92qd Namespace:kube-system ID:6c3d967b27b7f06f6d171c8def754307d88c748adfb2831a0fc462e8ae9b3d37 UID:0cb44388-0b22-4297-9ab4-151f169fe011 NetNS:/var/run/netns/db9b6b8d-8c50-42a7-afec-53553a423c11 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00082e2b8}] Aliases:map[]}"
	Dec 02 20:47:27 pause-796891 crio[2159]: time="2025-12-02T20:47:27.185020376Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-w92qd for CNI network kindnet (type=ptp)"
	Dec 02 20:47:27 pause-796891 crio[2159]: time="2025-12-02T20:47:27.185658078Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 02 20:47:27 pause-796891 crio[2159]: time="2025-12-02T20:47:27.185690212Z" level=info msg="Starting seccomp notifier watcher"
	Dec 02 20:47:27 pause-796891 crio[2159]: time="2025-12-02T20:47:27.185742733Z" level=info msg="Create NRI interface"
	Dec 02 20:47:27 pause-796891 crio[2159]: time="2025-12-02T20:47:27.185865315Z" level=info msg="built-in NRI default validator is disabled"
	Dec 02 20:47:27 pause-796891 crio[2159]: time="2025-12-02T20:47:27.185875859Z" level=info msg="runtime interface created"
	Dec 02 20:47:27 pause-796891 crio[2159]: time="2025-12-02T20:47:27.185890104Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 02 20:47:27 pause-796891 crio[2159]: time="2025-12-02T20:47:27.185898987Z" level=info msg="runtime interface starting up..."
	Dec 02 20:47:27 pause-796891 crio[2159]: time="2025-12-02T20:47:27.185906268Z" level=info msg="starting plugins..."
	Dec 02 20:47:27 pause-796891 crio[2159]: time="2025-12-02T20:47:27.185921754Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 02 20:47:27 pause-796891 crio[2159]: time="2025-12-02T20:47:27.186675746Z" level=info msg="No systemd watchdog enabled"
	Dec 02 20:47:27 pause-796891 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	505fac029aa8e       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   13 seconds ago      Running             coredns                   0                   6c3d967b27b7f       coredns-66bc5c9577-w92qd               kube-system
	793ea7c3faa0f       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   24 seconds ago      Running             kindnet-cni               0                   08311a30b99c8       kindnet-vc9rd                          kube-system
	a1c9c0e1a152d       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45   24 seconds ago      Running             kube-proxy                0                   52209ee70b167       kube-proxy-xkrx5                       kube-system
	9318d85c2a399       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952   37 seconds ago      Running             kube-scheduler            0                   f62bb4ebe62c5       kube-scheduler-pause-796891            kube-system
	23ddd07bcb5f5       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8   37 seconds ago      Running             kube-controller-manager   0                   0429cbc6bc472       kube-controller-manager-pause-796891   kube-system
	960f6856e1cc1       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85   37 seconds ago      Running             kube-apiserver            0                   f4cdfd1798bc1       kube-apiserver-pause-796891            kube-system
	c11a8e42979ff       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   37 seconds ago      Running             etcd                      0                   728c6d8e8976f       etcd-pause-796891                      kube-system
	
	
	==> coredns [505fac029aa8ebe69ad93ef916db5bd1916f697eabe9a95ab19a1ef2cf11f065] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:34758 - 18108 "HINFO IN 2593030726178042316.6447336225444708699. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.01897556s
	
	
	==> describe nodes <==
	Name:               pause-796891
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-796891
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c0dec3f81fa1d67ed4ee425e0873d0aa009f9b92
	                    minikube.k8s.io/name=pause-796891
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_02T20_47_06_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 02 Dec 2025 20:47:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-796891
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 02 Dec 2025 20:47:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 02 Dec 2025 20:47:25 +0000   Tue, 02 Dec 2025 20:46:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 02 Dec 2025 20:47:25 +0000   Tue, 02 Dec 2025 20:46:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 02 Dec 2025 20:47:25 +0000   Tue, 02 Dec 2025 20:46:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 02 Dec 2025 20:47:25 +0000   Tue, 02 Dec 2025 20:47:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-796891
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 c31a325af81b969158c21fa769271857
	  System UUID:                fd3db284-124c-47b4-9667-a25865819ac7
	  Boot ID:                    9dd5456f-e394-4b4b-9458-48d26faf507e
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-w92qd                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     25s
	  kube-system                 etcd-pause-796891                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         30s
	  kube-system                 kindnet-vc9rd                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      26s
	  kube-system                 kube-apiserver-pause-796891             250m (3%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-controller-manager-pause-796891    200m (2%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-proxy-xkrx5                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-pause-796891             100m (1%)     0 (0%)      0 (0%)           0 (0%)         30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 24s                kube-proxy       
	  Normal  Starting                 40s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  39s (x8 over 40s)  kubelet          Node pause-796891 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    39s (x8 over 40s)  kubelet          Node pause-796891 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     39s (x8 over 40s)  kubelet          Node pause-796891 status is now: NodeHasSufficientPID
	  Normal  Starting                 30s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  30s                kubelet          Node pause-796891 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    30s                kubelet          Node pause-796891 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     30s                kubelet          Node pause-796891 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           26s                node-controller  Node pause-796891 event: Registered Node pause-796891 in Controller
	  Normal  NodeReady                14s                kubelet          Node pause-796891 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 2e f4 c0 f2 56 fb 08 06
	[  +0.000355] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 06 95 9a 02 fc fb 08 06
	[Dec 2 19:57] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000013] ll header: 00000000: a6 a8 6c 2c e4 fb 66 95 0d 9f b9 e6 08 00
	[  +1.020139] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a6 a8 6c 2c e4 fb 66 95 0d 9f b9 e6 08 00
	[  +1.023918] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: a6 a8 6c 2c e4 fb 66 95 0d 9f b9 e6 08 00
	[  +1.023921] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: a6 a8 6c 2c e4 fb 66 95 0d 9f b9 e6 08 00
	[  +1.023940] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a6 a8 6c 2c e4 fb 66 95 0d 9f b9 e6 08 00
	[  +1.023881] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: a6 a8 6c 2c e4 fb 66 95 0d 9f b9 e6 08 00
	[  +2.047855] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 a8 6c 2c e4 fb 66 95 0d 9f b9 e6 08 00
	[  +4.031797] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a6 a8 6c 2c e4 fb 66 95 0d 9f b9 e6 08 00
	[  +8.191553] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: a6 a8 6c 2c e4 fb 66 95 0d 9f b9 e6 08 00
	[Dec 2 19:58] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: a6 a8 6c 2c e4 fb 66 95 0d 9f b9 e6 08 00
	[ +32.254238] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a6 a8 6c 2c e4 fb 66 95 0d 9f b9 e6 08 00
	
	
	==> etcd [c11a8e42979ff1d43624fe1ca5a69467905d817d98358acd034602c84ec2d6c5] <==
	{"level":"warn","ts":"2025-12-02T20:47:02.117112Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"126.792651ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722597460941008902 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/clusterroles/view\" mod_revision:0 > success:<request_put:<key:\"/registry/clusterroles/view\" value_size:673 >> failure:<>>","response":"size:14"}
	{"level":"info","ts":"2025-12-02T20:47:02.117186Z","caller":"traceutil/trace.go:172","msg":"trace[1512788607] transaction","detail":"{read_only:false; response_revision:88; number_of_response:1; }","duration":"253.692652ms","start":"2025-12-02T20:47:01.863482Z","end":"2025-12-02T20:47:02.117174Z","steps":["trace[1512788607] 'process raft request'  (duration: 126.762258ms)","trace[1512788607] 'compare'  (duration: 126.661129ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-02T20:47:02.117131Z","caller":"traceutil/trace.go:172","msg":"trace[935880625] range","detail":"{range_begin:/registry/events/default/pause-796891.187d80f458bff806; range_end:; response_count:1; response_revision:87; }","duration":"252.942402ms","start":"2025-12-02T20:47:01.864172Z","end":"2025-12-02T20:47:02.117115Z","steps":["trace[935880625] 'agreement among raft nodes before linearized reading'  (duration: 126.115098ms)","trace[935880625] 'range keys from in-memory index tree'  (duration: 126.590521ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-02T20:47:02.576596Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"255.399054ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722597460941008911 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/clusterroles/system:aggregate-to-edit\" mod_revision:0 > success:<request_put:<key:\"/registry/clusterroles/system:aggregate-to-edit\" value_size:2065 >> failure:<>>","response":"size:14"}
	{"level":"info","ts":"2025-12-02T20:47:02.576668Z","caller":"traceutil/trace.go:172","msg":"trace[219182840] transaction","detail":"{read_only:false; response_revision:92; number_of_response:1; }","duration":"382.89349ms","start":"2025-12-02T20:47:02.193764Z","end":"2025-12-02T20:47:02.576658Z","steps":["trace[219182840] 'process raft request'  (duration: 127.393118ms)","trace[219182840] 'compare'  (duration: 255.299282ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-02T20:47:02.576699Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-02T20:47:02.193747Z","time spent":"382.941738ms","remote":"127.0.0.1:37502","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":2120,"response count":0,"response size":37,"request content":"compare:<target:MOD key:\"/registry/clusterroles/system:aggregate-to-edit\" mod_revision:0 > success:<request_put:<key:\"/registry/clusterroles/system:aggregate-to-edit\" value_size:2065 >> failure:<>"}
	{"level":"info","ts":"2025-12-02T20:47:02.579263Z","caller":"traceutil/trace.go:172","msg":"trace[509960738] linearizableReadLoop","detail":"{readStateIndex:97; appliedIndex:97; }","duration":"163.172703ms","start":"2025-12-02T20:47:02.416060Z","end":"2025-12-02T20:47:02.579233Z","steps":["trace[509960738] 'read index received'  (duration: 163.16773ms)","trace[509960738] 'applied index is now lower than readState.Index'  (duration: 3.824µs)"],"step_count":2}
	{"level":"info","ts":"2025-12-02T20:47:02.579359Z","caller":"traceutil/trace.go:172","msg":"trace[1534031233] transaction","detail":"{read_only:false; response_revision:93; number_of_response:1; }","duration":"385.164378ms","start":"2025-12-02T20:47:02.194186Z","end":"2025-12-02T20:47:02.579350Z","steps":["trace[1534031233] 'process raft request'  (duration: 385.08578ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-02T20:47:02.579381Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"163.317542ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-12-02T20:47:02.579406Z","caller":"traceutil/trace.go:172","msg":"trace[809125091] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:93; }","duration":"163.364285ms","start":"2025-12-02T20:47:02.416035Z","end":"2025-12-02T20:47:02.579400Z","steps":["trace[809125091] 'agreement among raft nodes before linearized reading'  (duration: 163.303995ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-02T20:47:02.579595Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-02T20:47:02.194167Z","time spent":"385.216613ms","remote":"127.0.0.1:36864","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":670,"response count":0,"response size":37,"request content":"compare:<target:MOD key:\"/registry/events/default/pause-796891.187d80f458bfc9fd\" mod_revision:87 > success:<request_put:<key:\"/registry/events/default/pause-796891.187d80f458bfc9fd\" value_size:598 lease:499225424086233048 >> failure:<request_range:<key:\"/registry/events/default/pause-796891.187d80f458bfc9fd\" > >"}
	{"level":"info","ts":"2025-12-02T20:47:02.710353Z","caller":"traceutil/trace.go:172","msg":"trace[1414011434] linearizableReadLoop","detail":"{readStateIndex:98; appliedIndex:98; }","duration":"128.354047ms","start":"2025-12-02T20:47:02.581968Z","end":"2025-12-02T20:47:02.710322Z","steps":["trace[1414011434] 'read index received'  (duration: 128.346573ms)","trace[1414011434] 'applied index is now lower than readState.Index'  (duration: 6.427µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-02T20:47:02.815371Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"233.383747ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/default/pause-796891.187d80f458bff806\" limit:1 ","response":"range_response_count:1 size:678"}
	{"level":"info","ts":"2025-12-02T20:47:02.815451Z","caller":"traceutil/trace.go:172","msg":"trace[354733449] range","detail":"{range_begin:/registry/events/default/pause-796891.187d80f458bff806; range_end:; response_count:1; response_revision:93; }","duration":"233.477157ms","start":"2025-12-02T20:47:02.581959Z","end":"2025-12-02T20:47:02.815436Z","steps":["trace[354733449] 'agreement among raft nodes before linearized reading'  (duration: 128.489763ms)","trace[354733449] 'range keys from in-memory index tree'  (duration: 104.773392ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-02T20:47:02.815399Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"104.997987ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722597460941008916 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/clusterroles/system:aggregate-to-view\" mod_revision:0 > success:<request_put:<key:\"/registry/clusterroles/system:aggregate-to-view\" value_size:1962 >> failure:<>>","response":"size:14"}
	{"level":"info","ts":"2025-12-02T20:47:02.815584Z","caller":"traceutil/trace.go:172","msg":"trace[106614381] transaction","detail":"{read_only:false; response_revision:94; number_of_response:1; }","duration":"233.796379ms","start":"2025-12-02T20:47:02.581769Z","end":"2025-12-02T20:47:02.815565Z","steps":["trace[106614381] 'process raft request'  (duration: 128.583943ms)","trace[106614381] 'compare'  (duration: 104.885776ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-02T20:47:02.989675Z","caller":"traceutil/trace.go:172","msg":"trace[1208988174] linearizableReadLoop","detail":"{readStateIndex:101; appliedIndex:101; }","duration":"138.393928ms","start":"2025-12-02T20:47:02.851247Z","end":"2025-12-02T20:47:02.989641Z","steps":["trace[1208988174] 'read index received'  (duration: 138.38224ms)","trace[1208988174] 'applied index is now lower than readState.Index'  (duration: 10.433µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-02T20:47:03.206132Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"354.86248ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:node\" limit:1 ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-12-02T20:47:03.206204Z","caller":"traceutil/trace.go:172","msg":"trace[1317133690] range","detail":"{range_begin:/registry/clusterroles/system:node; range_end:; response_count:0; response_revision:96; }","duration":"354.949058ms","start":"2025-12-02T20:47:02.851239Z","end":"2025-12-02T20:47:03.206188Z","steps":["trace[1317133690] 'agreement among raft nodes before linearized reading'  (duration: 138.496472ms)","trace[1317133690] 'range keys from in-memory index tree'  (duration: 216.311905ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-02T20:47:03.206244Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-02T20:47:02.851229Z","time spent":"355.004422ms","remote":"127.0.0.1:37502","response type":"/etcdserverpb.KV/Range","request count":0,"request size":38,"response count":0,"response size":27,"request content":"key:\"/registry/clusterroles/system:node\" limit:1 "}
	{"level":"warn","ts":"2025-12-02T20:47:03.206316Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"216.563447ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722597460941008922 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/pause-796891.187d80f458c014ff\" mod_revision:91 > success:<request_put:<key:\"/registry/events/default/pause-796891.187d80f458c014ff\" value_size:592 lease:499225424086233048 >> failure:<request_range:<key:\"/registry/events/default/pause-796891.187d80f458c014ff\" > >>","response":"size:14"}
	{"level":"info","ts":"2025-12-02T20:47:03.206442Z","caller":"traceutil/trace.go:172","msg":"trace[1179217833] transaction","detail":"{read_only:false; response_revision:98; number_of_response:1; }","duration":"346.152304ms","start":"2025-12-02T20:47:02.860280Z","end":"2025-12-02T20:47:03.206432Z","steps":["trace[1179217833] 'process raft request'  (duration: 346.095279ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-02T20:47:03.206490Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-02T20:47:02.860259Z","time spent":"346.207036ms","remote":"127.0.0.1:37106","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":5823,"response count":0,"response size":37,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-controller-manager-pause-796891\" mod_revision:0 > success:<request_put:<key:\"/registry/pods/kube-system/kube-controller-manager-pause-796891\" value_size:5752 >> failure:<>"}
	{"level":"info","ts":"2025-12-02T20:47:03.206463Z","caller":"traceutil/trace.go:172","msg":"trace[228947112] transaction","detail":"{read_only:false; response_revision:97; number_of_response:1; }","duration":"355.377284ms","start":"2025-12-02T20:47:02.851061Z","end":"2025-12-02T20:47:03.206438Z","steps":["trace[228947112] 'process raft request'  (duration: 138.638687ms)","trace[228947112] 'compare'  (duration: 216.463207ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-02T20:47:03.207305Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-02T20:47:02.851038Z","time spent":"356.076959ms","remote":"127.0.0.1:36864","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":664,"response count":0,"response size":37,"request content":"compare:<target:MOD key:\"/registry/events/default/pause-796891.187d80f458c014ff\" mod_revision:91 > success:<request_put:<key:\"/registry/events/default/pause-796891.187d80f458c014ff\" value_size:592 lease:499225424086233048 >> failure:<request_range:<key:\"/registry/events/default/pause-796891.187d80f458c014ff\" > >"}
	
	
	==> kernel <==
	 20:47:35 up  2:29,  0 user,  load average: 3.75, 1.80, 1.35
	Linux pause-796891 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [793ea7c3faa0f3cd934cc0b58d2de439c2b3bfb6db801cfd46e1ed4c7ddca010] <==
	I1202 20:47:10.577835       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1202 20:47:10.578945       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1202 20:47:10.579214       1 main.go:148] setting mtu 1500 for CNI 
	I1202 20:47:10.579241       1 main.go:178] kindnetd IP family: "ipv4"
	I1202 20:47:10.579263       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-02T20:47:10Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1202 20:47:10.974660       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1202 20:47:10.975034       1 controller.go:381] "Waiting for informer caches to sync"
	I1202 20:47:10.984061       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1202 20:47:10.985136       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1202 20:47:11.085638       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1202 20:47:11.086284       1 metrics.go:72] Registering metrics
	I1202 20:47:11.086372       1 controller.go:711] "Syncing nftables rules"
	I1202 20:47:20.796174       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1202 20:47:20.796244       1 main.go:301] handling current node
	I1202 20:47:30.796188       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1202 20:47:30.796222       1 main.go:301] handling current node
	
	
	==> kube-apiserver [960f6856e1cc1fc4d6d11314186574243ec408ec9cf17d2adcd795f4de63295e] <==
	I1202 20:47:00.298291       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1202 20:47:00.319808       1 controller.go:667] quota admission added evaluator for: namespaces
	I1202 20:47:00.336213       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1202 20:47:00.336783       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1202 20:47:00.345658       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1202 20:47:00.345816       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1202 20:47:00.345921       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1202 20:47:00.461197       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1202 20:47:01.147255       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1202 20:47:01.384345       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1202 20:47:01.384367       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1202 20:47:03.986171       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1202 20:47:04.056555       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1202 20:47:04.145722       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1202 20:47:04.235793       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1202 20:47:04.255740       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1202 20:47:04.258984       1 controller.go:667] quota admission added evaluator for: endpoints
	I1202 20:47:04.265314       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1202 20:47:05.249180       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1202 20:47:05.260048       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1202 20:47:05.270383       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1202 20:47:09.842900       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1202 20:47:09.846740       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1202 20:47:09.890568       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1202 20:47:10.157411       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [23ddd07bcb5f5b56d13f1e4f94e80d95f00c7337ca84530879e15c87231fe5a1] <==
	I1202 20:47:09.137270       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1202 20:47:09.137432       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1202 20:47:09.137449       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1202 20:47:09.137491       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1202 20:47:09.137574       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1202 20:47:09.137692       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1202 20:47:09.137762       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1202 20:47:09.137892       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1202 20:47:09.138138       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1202 20:47:09.138201       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1202 20:47:09.138232       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1202 20:47:09.141097       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1202 20:47:09.142230       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1202 20:47:09.143187       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1202 20:47:09.143275       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1202 20:47:09.143326       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1202 20:47:09.143332       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1202 20:47:09.143340       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1202 20:47:09.145254       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1202 20:47:09.147407       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1202 20:47:09.149653       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1202 20:47:09.150848       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1202 20:47:09.153267       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-796891" podCIDRs=["10.244.0.0/24"]
	I1202 20:47:09.164346       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1202 20:47:24.091289       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [a1c9c0e1a152d0e55298e0b7dbd3ffe1cf2959ae52fbb82cfa7914940cd2e07f] <==
	I1202 20:47:10.356220       1 server_linux.go:53] "Using iptables proxy"
	I1202 20:47:10.425240       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1202 20:47:10.525337       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1202 20:47:10.525378       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1202 20:47:10.525467       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1202 20:47:10.553178       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1202 20:47:10.553243       1 server_linux.go:132] "Using iptables Proxier"
	I1202 20:47:10.559458       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1202 20:47:10.559814       1 server.go:527] "Version info" version="v1.34.2"
	I1202 20:47:10.559844       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 20:47:10.561408       1 config.go:200] "Starting service config controller"
	I1202 20:47:10.561433       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1202 20:47:10.561412       1 config.go:106] "Starting endpoint slice config controller"
	I1202 20:47:10.561463       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1202 20:47:10.561490       1 config.go:403] "Starting serviceCIDR config controller"
	I1202 20:47:10.561497       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1202 20:47:10.562012       1 config.go:309] "Starting node config controller"
	I1202 20:47:10.562029       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1202 20:47:10.562036       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1202 20:47:10.662062       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1202 20:47:10.662132       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1202 20:47:10.662137       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [9318d85c2a3992238185b0b65f90b7afce8f42bd6fd6934e4563b4bfc16b05a8] <==
	E1202 20:47:01.127568       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1202 20:47:01.150144       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1202 20:47:01.196365       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1202 20:47:01.264603       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1202 20:47:01.322554       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1202 20:47:01.367132       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1202 20:47:01.371498       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1202 20:47:01.385266       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1202 20:47:01.467538       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1202 20:47:01.505118       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1202 20:47:01.515426       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1202 20:47:01.516318       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1202 20:47:01.536415       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1202 20:47:01.758917       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1202 20:47:01.771212       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1202 20:47:01.805733       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1202 20:47:01.807860       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1202 20:47:03.083358       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1202 20:47:03.241910       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1202 20:47:03.346376       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1202 20:47:03.494007       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1202 20:47:03.577572       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1202 20:47:03.695315       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1202 20:47:03.965881       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1202 20:47:08.866491       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 02 20:47:06 pause-796891 kubelet[1296]: E1202 20:47:06.237275    1296 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-796891\" already exists" pod="kube-system/kube-apiserver-pause-796891"
	Dec 02 20:47:06 pause-796891 kubelet[1296]: I1202 20:47:06.245497    1296 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-pause-796891" podStartSLOduration=4.245471856 podStartE2EDuration="4.245471856s" podCreationTimestamp="2025-12-02 20:47:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:47:06.243558205 +0000 UTC m=+1.173508374" watchObservedRunningTime="2025-12-02 20:47:06.245471856 +0000 UTC m=+1.175422018"
	Dec 02 20:47:06 pause-796891 kubelet[1296]: I1202 20:47:06.257948    1296 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-pause-796891" podStartSLOduration=1.257925176 podStartE2EDuration="1.257925176s" podCreationTimestamp="2025-12-02 20:47:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:47:06.256906872 +0000 UTC m=+1.186857034" watchObservedRunningTime="2025-12-02 20:47:06.257925176 +0000 UTC m=+1.187875344"
	Dec 02 20:47:06 pause-796891 kubelet[1296]: I1202 20:47:06.285899    1296 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-pause-796891" podStartSLOduration=1.285872263 podStartE2EDuration="1.285872263s" podCreationTimestamp="2025-12-02 20:47:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:47:06.271056962 +0000 UTC m=+1.201007152" watchObservedRunningTime="2025-12-02 20:47:06.285872263 +0000 UTC m=+1.215822430"
	Dec 02 20:47:06 pause-796891 kubelet[1296]: I1202 20:47:06.286101    1296 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-pause-796891" podStartSLOduration=1.2860546130000001 podStartE2EDuration="1.286054613s" podCreationTimestamp="2025-12-02 20:47:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:47:06.284543136 +0000 UTC m=+1.214493303" watchObservedRunningTime="2025-12-02 20:47:06.286054613 +0000 UTC m=+1.216004782"
	Dec 02 20:47:09 pause-796891 kubelet[1296]: I1202 20:47:09.212483    1296 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 02 20:47:09 pause-796891 kubelet[1296]: I1202 20:47:09.213647    1296 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 02 20:47:09 pause-796891 kubelet[1296]: I1202 20:47:09.996246    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qqg5\" (UniqueName: \"kubernetes.io/projected/9ad656d0-4a36-414b-9a25-f870fddf8d50-kube-api-access-9qqg5\") pod \"kube-proxy-xkrx5\" (UID: \"9ad656d0-4a36-414b-9a25-f870fddf8d50\") " pod="kube-system/kube-proxy-xkrx5"
	Dec 02 20:47:09 pause-796891 kubelet[1296]: I1202 20:47:09.996302    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcth7\" (UniqueName: \"kubernetes.io/projected/48eea110-a9bf-4374-bef9-b48a882a20c4-kube-api-access-qcth7\") pod \"kindnet-vc9rd\" (UID: \"48eea110-a9bf-4374-bef9-b48a882a20c4\") " pod="kube-system/kindnet-vc9rd"
	Dec 02 20:47:09 pause-796891 kubelet[1296]: I1202 20:47:09.996335    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9ad656d0-4a36-414b-9a25-f870fddf8d50-kube-proxy\") pod \"kube-proxy-xkrx5\" (UID: \"9ad656d0-4a36-414b-9a25-f870fddf8d50\") " pod="kube-system/kube-proxy-xkrx5"
	Dec 02 20:47:09 pause-796891 kubelet[1296]: I1202 20:47:09.996425    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9ad656d0-4a36-414b-9a25-f870fddf8d50-xtables-lock\") pod \"kube-proxy-xkrx5\" (UID: \"9ad656d0-4a36-414b-9a25-f870fddf8d50\") " pod="kube-system/kube-proxy-xkrx5"
	Dec 02 20:47:09 pause-796891 kubelet[1296]: I1202 20:47:09.996493    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9ad656d0-4a36-414b-9a25-f870fddf8d50-lib-modules\") pod \"kube-proxy-xkrx5\" (UID: \"9ad656d0-4a36-414b-9a25-f870fddf8d50\") " pod="kube-system/kube-proxy-xkrx5"
	Dec 02 20:47:09 pause-796891 kubelet[1296]: I1202 20:47:09.996522    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/48eea110-a9bf-4374-bef9-b48a882a20c4-lib-modules\") pod \"kindnet-vc9rd\" (UID: \"48eea110-a9bf-4374-bef9-b48a882a20c4\") " pod="kube-system/kindnet-vc9rd"
	Dec 02 20:47:09 pause-796891 kubelet[1296]: I1202 20:47:09.996620    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/48eea110-a9bf-4374-bef9-b48a882a20c4-cni-cfg\") pod \"kindnet-vc9rd\" (UID: \"48eea110-a9bf-4374-bef9-b48a882a20c4\") " pod="kube-system/kindnet-vc9rd"
	Dec 02 20:47:09 pause-796891 kubelet[1296]: I1202 20:47:09.996671    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/48eea110-a9bf-4374-bef9-b48a882a20c4-xtables-lock\") pod \"kindnet-vc9rd\" (UID: \"48eea110-a9bf-4374-bef9-b48a882a20c4\") " pod="kube-system/kindnet-vc9rd"
	Dec 02 20:47:11 pause-796891 kubelet[1296]: I1202 20:47:11.256734    1296 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-vc9rd" podStartSLOduration=2.256706066 podStartE2EDuration="2.256706066s" podCreationTimestamp="2025-12-02 20:47:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:47:11.242062869 +0000 UTC m=+6.172013036" watchObservedRunningTime="2025-12-02 20:47:11.256706066 +0000 UTC m=+6.186656240"
	Dec 02 20:47:11 pause-796891 kubelet[1296]: I1202 20:47:11.275871    1296 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-xkrx5" podStartSLOduration=2.275847085 podStartE2EDuration="2.275847085s" podCreationTimestamp="2025-12-02 20:47:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:47:11.275484603 +0000 UTC m=+6.205434769" watchObservedRunningTime="2025-12-02 20:47:11.275847085 +0000 UTC m=+6.205797252"
	Dec 02 20:47:21 pause-796891 kubelet[1296]: I1202 20:47:21.213339    1296 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Dec 02 20:47:21 pause-796891 kubelet[1296]: I1202 20:47:21.284185    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0cb44388-0b22-4297-9ab4-151f169fe011-config-volume\") pod \"coredns-66bc5c9577-w92qd\" (UID: \"0cb44388-0b22-4297-9ab4-151f169fe011\") " pod="kube-system/coredns-66bc5c9577-w92qd"
	Dec 02 20:47:21 pause-796891 kubelet[1296]: I1202 20:47:21.284268    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjbc9\" (UniqueName: \"kubernetes.io/projected/0cb44388-0b22-4297-9ab4-151f169fe011-kube-api-access-wjbc9\") pod \"coredns-66bc5c9577-w92qd\" (UID: \"0cb44388-0b22-4297-9ab4-151f169fe011\") " pod="kube-system/coredns-66bc5c9577-w92qd"
	Dec 02 20:47:22 pause-796891 kubelet[1296]: I1202 20:47:22.281979    1296 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-w92qd" podStartSLOduration=12.281955168 podStartE2EDuration="12.281955168s" podCreationTimestamp="2025-12-02 20:47:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:47:22.270392246 +0000 UTC m=+17.200342437" watchObservedRunningTime="2025-12-02 20:47:22.281955168 +0000 UTC m=+17.211905335"
	Dec 02 20:47:30 pause-796891 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 02 20:47:30 pause-796891 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 02 20:47:30 pause-796891 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 20:47:30 pause-796891 systemd[1]: kubelet.service: Consumed 1.259s CPU time.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-796891 -n pause-796891
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-796891 -n pause-796891: exit status 2 (386.417553ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-796891 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (5.68s)
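Note on the status probe above: helpers_test.go only prints the APIServer field, so the "Running" in stdout and the non-zero exit can coexist; minikube status exits non-zero whenever any component of the profile is not in its expected state, which is why the harness adds "may be ok". A quick way to see the full component breakdown when triaging this by hand (illustrative commands, not captured from this run; they assume the pause-796891 profile still exists):

  out/minikube-linux-amd64 status -p pause-796891           # host, kubelet, apiserver and kubeconfig state in text form
  out/minikube-linux-amd64 status -p pause-796891 -o json   # the same data as JSON, easier to inspect in scripts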

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (3.4s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-992336 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-992336 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (294.419505ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T20:54:28Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
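The MK_ADDON_ENABLE_PAUSED exit above is raised before the addon is ever applied: per the stderr, minikube's "is the cluster paused?" pre-check shells out to "sudo runc list -f json" inside the node, and on this crio profile that fails because /run/runc does not exist. A rough manual reproduction against the same profile (illustrative commands, not from the test run; they assume the old-k8s-version-992336 container is still up and that crictl is present in the node image):

  out/minikube-linux-amd64 ssh -p old-k8s-version-992336 sudo runc list -f json   # the command the pre-check runs; fails with the /run/runc error above
  out/minikube-linux-amd64 ssh -p old-k8s-version-992336 sudo ls /run/runc        # confirms the runc state directory is absent
  out/minikube-linux-amd64 ssh -p old-k8s-version-992336 sudo crictl ps           # the workload containers are managed by crio and still listed here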
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-992336 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-992336 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-992336 describe deploy/metrics-server -n kube-system: exit status 1 (69.661618ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-992336 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
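If the metrics-server deployment had been created, the overrides passed to the addon ( --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain ) are expected by the test to surface as the deployment's container image; a minimal manual check for that, assuming the deployment exists (here it does not, hence the NotFound above):

  kubectl --context old-k8s-version-992336 -n kube-system get deploy metrics-server \
    -o jsonpath='{.spec.template.spec.containers[0].image}'   # should contain fake.domain/registry.k8s.io/echoserver:1.4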
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-992336
helpers_test.go:243: (dbg) docker inspect old-k8s-version-992336:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "51827f72c809ddc8017131ab6c734b0218cc31e7e1a61761312a7eb2f62a2f62",
	        "Created": "2025-12-02T20:53:31.91066414Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 717696,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-02T20:53:31.96117678Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/51827f72c809ddc8017131ab6c734b0218cc31e7e1a61761312a7eb2f62a2f62/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/51827f72c809ddc8017131ab6c734b0218cc31e7e1a61761312a7eb2f62a2f62/hostname",
	        "HostsPath": "/var/lib/docker/containers/51827f72c809ddc8017131ab6c734b0218cc31e7e1a61761312a7eb2f62a2f62/hosts",
	        "LogPath": "/var/lib/docker/containers/51827f72c809ddc8017131ab6c734b0218cc31e7e1a61761312a7eb2f62a2f62/51827f72c809ddc8017131ab6c734b0218cc31e7e1a61761312a7eb2f62a2f62-json.log",
	        "Name": "/old-k8s-version-992336",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-992336:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-992336",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "51827f72c809ddc8017131ab6c734b0218cc31e7e1a61761312a7eb2f62a2f62",
	                "LowerDir": "/var/lib/docker/overlay2/7c0073ae68bbddb0c31d7b4a3575e90065e1d78fb046473d890be499fbc620c1-init/diff:/var/lib/docker/overlay2/49bb44e987885c48600d5ae7ad3c81cad82a00f38070c0460882c5746d9fae59/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7c0073ae68bbddb0c31d7b4a3575e90065e1d78fb046473d890be499fbc620c1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7c0073ae68bbddb0c31d7b4a3575e90065e1d78fb046473d890be499fbc620c1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7c0073ae68bbddb0c31d7b4a3575e90065e1d78fb046473d890be499fbc620c1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-992336",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-992336/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-992336",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-992336",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-992336",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "7bc7e565226c88e8ed196cac76f7146a92b34fca7c81a0942c388a3d687d00b3",
	            "SandboxKey": "/var/run/docker/netns/7bc7e565226c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33473"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33474"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33477"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33475"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33476"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-992336": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "65ab470fa0e2676960773427a71fe76968e07b9da2ef303b86ef95d30a18b6c4",
	                    "EndpointID": "2434a1955045842d29c6dbacc2a78374d79c33588796b0179e2d02bff83c11b4",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "9a:2e:0a:ab:a6:cc",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-992336",
	                        "51827f72c809"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-992336 -n old-k8s-version-992336
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-992336 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-992336 logs -n 25: (2.158924929s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                     ARGS                                                                     │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p flannel-775392 sudo systemctl cat kubelet --no-pager                                                                                      │ flannel-775392         │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │ 02 Dec 25 20:54 UTC │
	│ ssh     │ -p flannel-775392 sudo journalctl -xeu kubelet --all --full --no-pager                                                                       │ flannel-775392         │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │ 02 Dec 25 20:54 UTC │
	│ ssh     │ -p bridge-775392 pgrep -a kubelet                                                                                                            │ bridge-775392          │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │ 02 Dec 25 20:54 UTC │
	│ ssh     │ -p flannel-775392 sudo cat /etc/kubernetes/kubelet.conf                                                                                      │ flannel-775392         │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │ 02 Dec 25 20:54 UTC │
	│ ssh     │ -p flannel-775392 sudo cat /var/lib/kubelet/config.yaml                                                                                      │ flannel-775392         │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │ 02 Dec 25 20:54 UTC │
	│ ssh     │ -p flannel-775392 sudo systemctl status docker --all --full --no-pager                                                                       │ flannel-775392         │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │                     │
	│ ssh     │ -p flannel-775392 sudo systemctl cat docker --no-pager                                                                                       │ flannel-775392         │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │ 02 Dec 25 20:54 UTC │
	│ ssh     │ -p flannel-775392 sudo cat /etc/docker/daemon.json                                                                                           │ flannel-775392         │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │                     │
	│ ssh     │ -p flannel-775392 sudo docker system info                                                                                                    │ flannel-775392         │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │                     │
	│ ssh     │ -p flannel-775392 sudo systemctl status cri-docker --all --full --no-pager                                                                   │ flannel-775392         │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │                     │
	│ ssh     │ -p flannel-775392 sudo systemctl cat cri-docker --no-pager                                                                                   │ flannel-775392         │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │ 02 Dec 25 20:54 UTC │
	│ ssh     │ -p flannel-775392 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                              │ flannel-775392         │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │                     │
	│ ssh     │ -p flannel-775392 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                        │ flannel-775392         │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │ 02 Dec 25 20:54 UTC │
	│ ssh     │ -p flannel-775392 sudo cri-dockerd --version                                                                                                 │ flannel-775392         │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │ 02 Dec 25 20:54 UTC │
	│ ssh     │ -p flannel-775392 sudo systemctl status containerd --all --full --no-pager                                                                   │ flannel-775392         │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │                     │
	│ ssh     │ -p flannel-775392 sudo systemctl cat containerd --no-pager                                                                                   │ flannel-775392         │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │ 02 Dec 25 20:54 UTC │
	│ ssh     │ -p flannel-775392 sudo cat /lib/systemd/system/containerd.service                                                                            │ flannel-775392         │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │ 02 Dec 25 20:54 UTC │
	│ ssh     │ -p flannel-775392 sudo cat /etc/containerd/config.toml                                                                                       │ flannel-775392         │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │ 02 Dec 25 20:54 UTC │
	│ ssh     │ -p flannel-775392 sudo containerd config dump                                                                                                │ flannel-775392         │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │ 02 Dec 25 20:54 UTC │
	│ ssh     │ -p flannel-775392 sudo systemctl status crio --all --full --no-pager                                                                         │ flannel-775392         │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │ 02 Dec 25 20:54 UTC │
	│ ssh     │ -p flannel-775392 sudo systemctl cat crio --no-pager                                                                                         │ flannel-775392         │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │ 02 Dec 25 20:54 UTC │
	│ ssh     │ -p flannel-775392 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                               │ flannel-775392         │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │ 02 Dec 25 20:54 UTC │
	│ ssh     │ -p flannel-775392 sudo crio config                                                                                                           │ flannel-775392         │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │ 02 Dec 25 20:54 UTC │
	│ delete  │ -p flannel-775392                                                                                                                            │ flannel-775392         │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-992336 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain │ old-k8s-version-992336 │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 20:54:13
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 20:54:13.345482  727677 out.go:360] Setting OutFile to fd 1 ...
	I1202 20:54:13.345606  727677 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:54:13.345614  727677 out.go:374] Setting ErrFile to fd 2...
	I1202 20:54:13.345618  727677 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:54:13.345840  727677 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-407427/.minikube/bin
	I1202 20:54:13.346342  727677 out.go:368] Setting JSON to false
	I1202 20:54:13.347614  727677 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":9397,"bootTime":1764699456,"procs":346,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 20:54:13.347682  727677 start.go:143] virtualization: kvm guest
	I1202 20:54:13.350210  727677 out.go:179] * [no-preload-336331] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1202 20:54:13.351789  727677 out.go:179]   - MINIKUBE_LOCATION=21997
	I1202 20:54:13.351789  727677 notify.go:221] Checking for updates...
	I1202 20:54:13.354989  727677 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 20:54:13.356380  727677 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-407427/kubeconfig
	I1202 20:54:13.357777  727677 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-407427/.minikube
	I1202 20:54:13.359365  727677 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1202 20:54:13.360785  727677 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 20:54:13.363009  727677 config.go:182] Loaded profile config "bridge-775392": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 20:54:13.363160  727677 config.go:182] Loaded profile config "flannel-775392": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 20:54:13.363321  727677 config.go:182] Loaded profile config "old-k8s-version-992336": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1202 20:54:13.363462  727677 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 20:54:13.394294  727677 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1202 20:54:13.394471  727677 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 20:54:13.460832  727677 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-02 20:54:13.449364016 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 20:54:13.460975  727677 docker.go:319] overlay module found
	I1202 20:54:13.462981  727677 out.go:179] * Using the docker driver based on user configuration
	I1202 20:54:13.465225  727677 start.go:309] selected driver: docker
	I1202 20:54:13.465249  727677 start.go:927] validating driver "docker" against <nil>
	I1202 20:54:13.465268  727677 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 20:54:13.466116  727677 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 20:54:13.530426  727677 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-02 20:54:13.519886448 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 20:54:13.530598  727677 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1202 20:54:13.530838  727677 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 20:54:13.532775  727677 out.go:179] * Using Docker driver with root privileges
	I1202 20:54:13.534278  727677 cni.go:84] Creating CNI manager for ""
	I1202 20:54:13.534348  727677 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 20:54:13.534362  727677 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1202 20:54:13.534437  727677 start.go:353] cluster config:
	{Name:no-preload-336331 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-336331 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 20:54:13.536043  727677 out.go:179] * Starting "no-preload-336331" primary control-plane node in "no-preload-336331" cluster
	I1202 20:54:13.537418  727677 cache.go:134] Beginning downloading kic base image for docker with crio
	I1202 20:54:13.538798  727677 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1202 20:54:13.540116  727677 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1202 20:54:13.540203  727677 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 20:54:13.540261  727677 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/no-preload-336331/config.json ...
	I1202 20:54:13.540304  727677 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/no-preload-336331/config.json: {Name:mk59bac27682f67d02186858b422be13d4215c0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:54:13.540411  727677 cache.go:107] acquiring lock: {Name:mk911a7415c1db6121866a16aaa8d547d8fc27e6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 20:54:13.540450  727677 cache.go:107] acquiring lock: {Name:mk01b60fbf34196e8795139c06a53061b5bbef1c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 20:54:13.540411  727677 cache.go:107] acquiring lock: {Name:mk5eb5d2ea906db41607942a8f8093a266b381cd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 20:54:13.540524  727677 cache.go:115] /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1202 20:54:13.540543  727677 cache.go:115] /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1202 20:54:13.540546  727677 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 97.961µs
	I1202 20:54:13.540524  727677 cache.go:107] acquiring lock: {Name:mk1ce3ec6c8a0a78faf5ccb0bb487dc5a506ffff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 20:54:13.540564  727677 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1202 20:54:13.540555  727677 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 155.783µs
	I1202 20:54:13.540575  727677 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1202 20:54:13.540532  727677 cache.go:115] /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1202 20:54:13.540583  727677 cache.go:107] acquiring lock: {Name:mkf03491d08646dc0a2273e6c20a49756d4e1761 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 20:54:13.540614  727677 cache.go:115] /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1202 20:54:13.540606  727677 cache.go:107] acquiring lock: {Name:mkda13332b8e3f844bd42c29502a9c7671b1ad3b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 20:54:13.540622  727677 cache.go:115] /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1202 20:54:13.540631  727677 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 108.633µs
	I1202 20:54:13.540635  727677 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 55.433µs
	I1202 20:54:13.540641  727677 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1202 20:54:13.540643  727677 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1202 20:54:13.540476  727677 cache.go:107] acquiring lock: {Name:mk4453b54b86b3689d0543734fa82feede2f4f33 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 20:54:13.540614  727677 cache.go:107] acquiring lock: {Name:mk8c99492104b5abf1d260aa0432b08c059c9259 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 20:54:13.540591  727677 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 196.843µs
	I1202 20:54:13.540696  727677 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1202 20:54:13.540727  727677 cache.go:115] /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1202 20:54:13.540745  727677 cache.go:115] /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 exists
	I1202 20:54:13.540748  727677 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 138.679µs
	I1202 20:54:13.540753  727677 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0" took 304.695µs
	I1202 20:54:13.540758  727677 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1202 20:54:13.540761  727677 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1202 20:54:13.540790  727677 cache.go:115] /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1202 20:54:13.540823  727677 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1" took 272.666µs
	I1202 20:54:13.540850  727677 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1202 20:54:13.540869  727677 cache.go:87] Successfully saved all images to host disk.
	I1202 20:54:13.566867  727677 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1202 20:54:13.566895  727677 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1202 20:54:13.566918  727677 cache.go:243] Successfully downloaded all kic artifacts
	I1202 20:54:13.566957  727677 start.go:360] acquireMachinesLock for no-preload-336331: {Name:mk8bc7d2c702916aad4c913aa227a3dc418a34af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 20:54:13.567110  727677 start.go:364] duration metric: took 128.278µs to acquireMachinesLock for "no-preload-336331"
	I1202 20:54:13.567141  727677 start.go:93] Provisioning new machine with config: &{Name:no-preload-336331 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-336331 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 20:54:13.567252  727677 start.go:125] createHost starting for "" (driver="docker")
	W1202 20:54:12.670857  716692 node_ready.go:57] node "old-k8s-version-992336" has "Ready":"False" status (will retry)
	W1202 20:54:15.171287  716692 node_ready.go:57] node "old-k8s-version-992336" has "Ready":"False" status (will retry)
	I1202 20:54:15.671474  716692 node_ready.go:49] node "old-k8s-version-992336" is "Ready"
	I1202 20:54:15.671511  716692 node_ready.go:38] duration metric: took 14.004776783s for node "old-k8s-version-992336" to be "Ready" ...
	I1202 20:54:15.671530  716692 api_server.go:52] waiting for apiserver process to appear ...
	I1202 20:54:15.671591  716692 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:54:15.689410  716692 api_server.go:72] duration metric: took 14.484539927s to wait for apiserver process to appear ...
	I1202 20:54:15.689453  716692 api_server.go:88] waiting for apiserver healthz status ...
	I1202 20:54:15.689484  716692 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	W1202 20:54:13.076824  709465 pod_ready.go:104] pod "coredns-66bc5c9577-rhmml" is not "Ready", error: <nil>
	W1202 20:54:15.077563  709465 pod_ready.go:104] pod "coredns-66bc5c9577-rhmml" is not "Ready", error: <nil>
	I1202 20:54:13.569385  727677 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1202 20:54:13.569606  727677 start.go:159] libmachine.API.Create for "no-preload-336331" (driver="docker")
	I1202 20:54:13.569642  727677 client.go:173] LocalClient.Create starting
	I1202 20:54:13.569729  727677 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca.pem
	I1202 20:54:13.569764  727677 main.go:143] libmachine: Decoding PEM data...
	I1202 20:54:13.569778  727677 main.go:143] libmachine: Parsing certificate...
	I1202 20:54:13.569844  727677 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21997-407427/.minikube/certs/cert.pem
	I1202 20:54:13.569873  727677 main.go:143] libmachine: Decoding PEM data...
	I1202 20:54:13.569887  727677 main.go:143] libmachine: Parsing certificate...
	I1202 20:54:13.570266  727677 cli_runner.go:164] Run: docker network inspect no-preload-336331 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1202 20:54:13.591388  727677 cli_runner.go:211] docker network inspect no-preload-336331 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1202 20:54:13.591513  727677 network_create.go:284] running [docker network inspect no-preload-336331] to gather additional debugging logs...
	I1202 20:54:13.591547  727677 cli_runner.go:164] Run: docker network inspect no-preload-336331
	W1202 20:54:13.614108  727677 cli_runner.go:211] docker network inspect no-preload-336331 returned with exit code 1
	I1202 20:54:13.614145  727677 network_create.go:287] error running [docker network inspect no-preload-336331]: docker network inspect no-preload-336331: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-336331 not found
	I1202 20:54:13.614179  727677 network_create.go:289] output of [docker network inspect no-preload-336331]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-336331 not found
	
	** /stderr **
	I1202 20:54:13.614348  727677 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 20:54:13.639405  727677 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-acf081edf266 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:42:04:c0:60:47:62} reservation:<nil>}
	I1202 20:54:13.640659  727677 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-9623a21fb225 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:6a:fc:8b:40:15:1b} reservation:<nil>}
	I1202 20:54:13.641418  727677 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-f2b79e7e26a5 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:0e:c7:f4:38:1c:32} reservation:<nil>}
	I1202 20:54:13.642641  727677 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001face20}
	I1202 20:54:13.642711  727677 network_create.go:124] attempt to create docker network no-preload-336331 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1202 20:54:13.642788  727677 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-336331 no-preload-336331
	I1202 20:54:13.698385  727677 network_create.go:108] docker network no-preload-336331 192.168.76.0/24 created
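The "skipping subnet ... that is taken" lines followed by "using free private subnet 192.168.76.0/24" suggest a linear probe over candidate /24 blocks before the docker network is created. A minimal sketch under that assumption; the candidate step of 9 and the helper names are inferred from this log, not taken from minikube's source:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// takenSubnets collects the subnets of all existing docker networks.
func takenSubnets() (map[string]bool, error) {
	out, err := exec.Command("docker", "network", "ls", "-q").Output()
	if err != nil {
		return nil, err
	}
	taken := map[string]bool{}
	for _, id := range strings.Fields(string(out)) {
		cfg, err := exec.Command("docker", "network", "inspect", id,
			"--format", "{{range .IPAM.Config}}{{.Subnet}} {{end}}").Output()
		if err != nil {
			continue
		}
		for _, s := range strings.Fields(string(cfg)) {
			taken[s] = true
		}
	}
	return taken, nil
}

// createNetwork walks candidate 192.168.x.0/24 blocks and creates the
// network on the first one no existing bridge already uses.
func createNetwork(name string) error {
	taken, err := takenSubnets()
	if err != nil {
		return err
	}
	// Same third octets the log walks through: 49, 58, 67, 76, ...
	for third := 49; third < 255; third += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", third)
		if taken[subnet] {
			fmt.Println("skipping subnet", subnet, "that is taken")
			continue
		}
		gateway := fmt.Sprintf("192.168.%d.1", third)
		return exec.Command("docker", "network", "create", "--driver=bridge",
			"--subnet="+subnet, "--gateway="+gateway,
			"-o", "com.docker.network.driver.mtu=1500", name).Run()
	}
	return fmt.Errorf("no free private subnet found")
}

func main() { _ = createNetwork("no-preload-336331") }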
	I1202 20:54:13.698421  727677 kic.go:121] calculated static IP "192.168.76.2" for the "no-preload-336331" container
	I1202 20:54:13.698495  727677 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1202 20:54:13.717985  727677 cli_runner.go:164] Run: docker volume create no-preload-336331 --label name.minikube.sigs.k8s.io=no-preload-336331 --label created_by.minikube.sigs.k8s.io=true
	I1202 20:54:13.737997  727677 oci.go:103] Successfully created a docker volume no-preload-336331
	I1202 20:54:13.738084  727677 cli_runner.go:164] Run: docker run --rm --name no-preload-336331-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-336331 --entrypoint /usr/bin/test -v no-preload-336331:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib
	I1202 20:54:14.150979  727677 oci.go:107] Successfully prepared a docker volume no-preload-336331
	I1202 20:54:14.151221  727677 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	W1202 20:54:14.151349  727677 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1202 20:54:14.151403  727677 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1202 20:54:14.151467  727677 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1202 20:54:14.218178  727677 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-336331 --name no-preload-336331 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-336331 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-336331 --network no-preload-336331 --ip 192.168.76.2 --volume no-preload-336331:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b
	I1202 20:54:14.536876  727677 cli_runner.go:164] Run: docker container inspect no-preload-336331 --format={{.State.Running}}
	I1202 20:54:14.560225  727677 cli_runner.go:164] Run: docker container inspect no-preload-336331 --format={{.State.Status}}
	I1202 20:54:14.582805  727677 cli_runner.go:164] Run: docker exec no-preload-336331 stat /var/lib/dpkg/alternatives/iptables
	I1202 20:54:14.636653  727677 oci.go:144] the created container "no-preload-336331" has a running status.
	I1202 20:54:14.636691  727677 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21997-407427/.minikube/machines/no-preload-336331/id_rsa...
	I1202 20:54:14.924056  727677 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21997-407427/.minikube/machines/no-preload-336331/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1202 20:54:14.956745  727677 cli_runner.go:164] Run: docker container inspect no-preload-336331 --format={{.State.Status}}
	I1202 20:54:14.984200  727677 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1202 20:54:14.984224  727677 kic_runner.go:114] Args: [docker exec --privileged no-preload-336331 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1202 20:54:15.038478  727677 cli_runner.go:164] Run: docker container inspect no-preload-336331 --format={{.State.Status}}
	I1202 20:54:15.060744  727677 machine.go:94] provisionDockerMachine start ...
	I1202 20:54:15.060839  727677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-336331
	I1202 20:54:15.085162  727677 main.go:143] libmachine: Using SSH client type: native
	I1202 20:54:15.085464  727677 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33478 <nil> <nil>}
	I1202 20:54:15.085486  727677 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 20:54:15.241651  727677 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-336331
	
	I1202 20:54:15.241684  727677 ubuntu.go:182] provisioning hostname "no-preload-336331"
	I1202 20:54:15.241750  727677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-336331
	I1202 20:54:15.264628  727677 main.go:143] libmachine: Using SSH client type: native
	I1202 20:54:15.264896  727677 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33478 <nil> <nil>}
	I1202 20:54:15.264916  727677 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-336331 && echo "no-preload-336331" | sudo tee /etc/hostname
	I1202 20:54:15.427535  727677 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-336331
	
	I1202 20:54:15.427627  727677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-336331
	I1202 20:54:15.448902  727677 main.go:143] libmachine: Using SSH client type: native
	I1202 20:54:15.449252  727677 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33478 <nil> <nil>}
	I1202 20:54:15.449285  727677 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-336331' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-336331/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-336331' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 20:54:15.598098  727677 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1202 20:54:15.598142  727677 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-407427/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-407427/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-407427/.minikube}
	I1202 20:54:15.598181  727677 ubuntu.go:190] setting up certificates
	I1202 20:54:15.598212  727677 provision.go:84] configureAuth start
	I1202 20:54:15.598283  727677 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-336331
	I1202 20:54:15.619737  727677 provision.go:143] copyHostCerts
	I1202 20:54:15.619799  727677 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-407427/.minikube/ca.pem, removing ...
	I1202 20:54:15.619808  727677 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-407427/.minikube/ca.pem
	I1202 20:54:15.619875  727677 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-407427/.minikube/ca.pem (1082 bytes)
	I1202 20:54:15.619991  727677 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-407427/.minikube/cert.pem, removing ...
	I1202 20:54:15.620002  727677 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-407427/.minikube/cert.pem
	I1202 20:54:15.620032  727677 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-407427/.minikube/cert.pem (1123 bytes)
	I1202 20:54:15.620153  727677 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-407427/.minikube/key.pem, removing ...
	I1202 20:54:15.620169  727677 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-407427/.minikube/key.pem
	I1202 20:54:15.620199  727677 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-407427/.minikube/key.pem (1675 bytes)
	I1202 20:54:15.620272  727677 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-407427/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca-key.pem org=jenkins.no-preload-336331 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-336331]
	I1202 20:54:15.718200  727677 provision.go:177] copyRemoteCerts
	I1202 20:54:15.718261  727677 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 20:54:15.718306  727677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-336331
	I1202 20:54:15.746528  727677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33478 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/no-preload-336331/id_rsa Username:docker}
	I1202 20:54:15.852697  727677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 20:54:15.876868  727677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1202 20:54:15.897745  727677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1202 20:54:15.920274  727677 provision.go:87] duration metric: took 322.03978ms to configureAuth
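configureAuth above first refreshes the host-side copies of ca.pem, cert.pem and key.pem (copyHostCerts: "found ..., removing ..." then "cp: ..."). A minimal sketch of that remove-then-copy step, with copyHostCert as a hypothetical helper:

package main

import (
	"io"
	"os"
	"path/filepath"
)

// copyHostCert refreshes one cert: drop any stale copy at the .minikube
// root, then copy the current file from the certs directory.
func copyHostCert(minikubeHome, name string) error {
	src := filepath.Join(minikubeHome, "certs", name)
	dst := filepath.Join(minikubeHome, name)
	if _, err := os.Stat(dst); err == nil {
		// "found ..., removing ..." in the log corresponds to this branch.
		if err := os.Remove(dst); err != nil {
			return err
		}
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.OpenFile(dst, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0o600)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, in)
	return err
}

func main() {
	home := os.ExpandEnv("$HOME/.minikube")
	for _, f := range []string{"ca.pem", "cert.pem", "key.pem"} {
		_ = copyHostCert(home, f)
	}
}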
	I1202 20:54:15.920309  727677 ubuntu.go:206] setting minikube options for container-runtime
	I1202 20:54:15.920515  727677 config.go:182] Loaded profile config "no-preload-336331": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 20:54:15.920662  727677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-336331
	I1202 20:54:15.942986  727677 main.go:143] libmachine: Using SSH client type: native
	I1202 20:54:15.943311  727677 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33478 <nil> <nil>}
	I1202 20:54:15.943338  727677 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 20:54:16.264732  727677 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 20:54:16.264784  727677 machine.go:97] duration metric: took 1.204018723s to provisionDockerMachine
	I1202 20:54:16.264798  727677 client.go:176] duration metric: took 2.695146231s to LocalClient.Create
	I1202 20:54:16.264822  727677 start.go:167] duration metric: took 2.695217191s to libmachine.API.Create "no-preload-336331"
	I1202 20:54:16.264831  727677 start.go:293] postStartSetup for "no-preload-336331" (driver="docker")
	I1202 20:54:16.264848  727677 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 20:54:16.264923  727677 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 20:54:16.264973  727677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-336331
	I1202 20:54:16.288856  727677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33478 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/no-preload-336331/id_rsa Username:docker}
	I1202 20:54:16.398934  727677 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 20:54:16.403906  727677 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 20:54:16.403938  727677 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 20:54:16.403950  727677 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-407427/.minikube/addons for local assets ...
	I1202 20:54:16.404025  727677 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-407427/.minikube/files for local assets ...
	I1202 20:54:16.404170  727677 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-407427/.minikube/files/etc/ssl/certs/4110322.pem -> 4110322.pem in /etc/ssl/certs
	I1202 20:54:16.404309  727677 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 20:54:16.416139  727677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/files/etc/ssl/certs/4110322.pem --> /etc/ssl/certs/4110322.pem (1708 bytes)
	I1202 20:54:16.451936  727677 start.go:296] duration metric: took 187.089044ms for postStartSetup
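postStartSetup scans .minikube/files and mirrors every file onto the node at the same path, which is how files/etc/ssl/certs/4110322.pem ends up in /etc/ssl/certs above. A minimal sketch of the scan, with the SSH copy left out and localAssets as a hypothetical helper:

package main

import (
	"fmt"
	"io/fs"
	"path/filepath"
	"strings"
)

// localAssets walks the files directory and maps each file to its remote
// destination, which is simply the path relative to the files/ root.
func localAssets(filesDir string) ([]string, error) {
	var targets []string
	err := filepath.WalkDir(filesDir, func(path string, d fs.DirEntry, err error) error {
		if err != nil || d.IsDir() {
			return err
		}
		rel := strings.TrimPrefix(path, filesDir)
		targets = append(targets, rel)
		fmt.Printf("local asset: %s -> %s\n", path, rel)
		return nil
	})
	return targets, err
}

func main() {
	_, _ = localAssets("/home/jenkins/.minikube/files")
}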
	I1202 20:54:16.452432  727677 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-336331
	I1202 20:54:16.473969  727677 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/no-preload-336331/config.json ...
	I1202 20:54:16.474312  727677 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 20:54:16.474359  727677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-336331
	I1202 20:54:16.501324  727677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33478 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/no-preload-336331/id_rsa Username:docker}
	I1202 20:54:16.607062  727677 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 20:54:16.612196  727677 start.go:128] duration metric: took 3.044923456s to createHost
	I1202 20:54:16.612224  727677 start.go:83] releasing machines lock for "no-preload-336331", held for 3.045101517s
	I1202 20:54:16.612302  727677 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-336331
	I1202 20:54:16.633617  727677 ssh_runner.go:195] Run: cat /version.json
	I1202 20:54:16.633630  727677 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 20:54:16.633671  727677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-336331
	I1202 20:54:16.633700  727677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-336331
	I1202 20:54:16.654241  727677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33478 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/no-preload-336331/id_rsa Username:docker}
	I1202 20:54:16.654516  727677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33478 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/no-preload-336331/id_rsa Username:docker}
	I1202 20:54:16.753153  727677 ssh_runner.go:195] Run: systemctl --version
	I1202 20:54:16.817337  727677 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 20:54:16.862951  727677 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 20:54:16.869014  727677 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 20:54:16.869100  727677 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 20:54:16.900968  727677 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1202 20:54:16.901009  727677 start.go:496] detecting cgroup driver to use...
	I1202 20:54:16.901045  727677 detect.go:190] detected "systemd" cgroup driver on host os
	I1202 20:54:16.901121  727677 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 20:54:16.918688  727677 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 20:54:16.932812  727677 docker.go:218] disabling cri-docker service (if available) ...
	I1202 20:54:16.932874  727677 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 20:54:16.952320  727677 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 20:54:16.973491  727677 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 20:54:17.073693  727677 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 20:54:17.184141  727677 docker.go:234] disabling docker service ...
	I1202 20:54:17.184214  727677 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 20:54:17.208964  727677 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 20:54:17.226477  727677 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 20:54:17.321464  727677 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 20:54:17.419435  727677 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 20:54:17.435228  727677 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 20:54:17.452182  727677 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1202 20:54:17.452258  727677 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:54:17.464821  727677 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1202 20:54:17.464891  727677 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:54:17.476688  727677 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:54:17.489281  727677 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:54:17.503821  727677 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 20:54:17.515247  727677 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:54:17.526822  727677 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:54:17.543151  727677 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:54:17.554579  727677 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 20:54:17.565409  727677 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 20:54:17.576221  727677 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 20:54:17.665247  727677 ssh_runner.go:195] Run: sudo systemctl restart crio
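The block above configures CRI-O entirely through shelled-out edits: write /etc/crictl.yaml, point 02-crio.conf at the pause image and the systemd cgroup manager, then daemon-reload and restart the service. A minimal sketch that replays those commands through a stand-in for minikube's ssh_runner (the run callback is hypothetical; the command strings follow the log):

package main

import "fmt"

func configureCRIO(run func(cmd string) error) error {
	cmds := []string{
		`sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
" | sudo tee /etc/crictl.yaml`,
		`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo systemctl daemon-reload`,
		`sudo systemctl restart crio`,
	}
	for _, c := range cmds {
		if err := run(c); err != nil {
			return fmt.Errorf("running %q: %w", c, err)
		}
	}
	return nil
}

func main() {
	// Print instead of executing; a real runner would run each command over SSH.
	_ = configureCRIO(func(cmd string) error { fmt.Println("Run:", cmd); return nil })
}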
	I1202 20:54:17.815745  727677 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 20:54:17.815832  727677 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 20:54:17.820885  727677 start.go:564] Will wait 60s for crictl version
	I1202 20:54:17.820998  727677 ssh_runner.go:195] Run: which crictl
	I1202 20:54:17.824858  727677 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 20:54:17.854758  727677 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1202 20:54:17.854847  727677 ssh_runner.go:195] Run: crio --version
	I1202 20:54:17.889714  727677 ssh_runner.go:195] Run: crio --version
	I1202 20:54:17.926937  727677 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.2 ...
	I1202 20:54:15.698633  716692 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1202 20:54:15.700599  716692 api_server.go:141] control plane version: v1.28.0
	I1202 20:54:15.700632  716692 api_server.go:131] duration metric: took 11.167755ms to wait for apiserver health ...
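The healthz wait above is a plain HTTPS poll against the API server until the endpoint answers 200 with body "ok". A minimal sketch of such a poll; TLS verification is skipped here only to keep the example short, whereas the real check uses the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the URL until it returns 200/"ok" or the timeout passes.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil // corresponds to "healthz returned 200: ok" in the log
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver healthz at %s never became healthy", url)
}

func main() {
	fmt.Println(waitForHealthz("https://192.168.94.2:8443/healthz", time.Minute))
}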
	I1202 20:54:15.700643  716692 system_pods.go:43] waiting for kube-system pods to appear ...
	I1202 20:54:15.708601  716692 system_pods.go:59] 8 kube-system pods found
	I1202 20:54:15.708650  716692 system_pods.go:61] "coredns-5dd5756b68-ptzsf" [14b9d2d2-4853-419f-ad27-5d6f4c9c7e2c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 20:54:15.708658  716692 system_pods.go:61] "etcd-old-k8s-version-992336" [22527607-8153-442e-97cb-93555cbcdd3a] Running
	I1202 20:54:15.708663  716692 system_pods.go:61] "kindnet-jvmsp" [51a76a82-d4d0-4909-a7a7-49ad2e3fd9f0] Running
	I1202 20:54:15.708669  716692 system_pods.go:61] "kube-apiserver-old-k8s-version-992336" [5049999c-2987-49b7-ba74-9d7621b0759a] Running
	I1202 20:54:15.708682  716692 system_pods.go:61] "kube-controller-manager-old-k8s-version-992336" [34f637f6-d1c4-4620-9705-439b4db0805a] Running
	I1202 20:54:15.708687  716692 system_pods.go:61] "kube-proxy-qpzt8" [e7130e4a-3fd7-49ba-b6c6-ea6857c76765] Running
	I1202 20:54:15.708692  716692 system_pods.go:61] "kube-scheduler-old-k8s-version-992336" [c4e33a26-6df9-440c-9eff-9197bcdfd55c] Running
	I1202 20:54:15.708699  716692 system_pods.go:61] "storage-provisioner" [398f9134-7016-4782-9541-255e9925dd8d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 20:54:15.708715  716692 system_pods.go:74] duration metric: took 8.064644ms to wait for pod list to return data ...
	I1202 20:54:15.708725  716692 default_sa.go:34] waiting for default service account to be created ...
	I1202 20:54:15.711535  716692 default_sa.go:45] found service account: "default"
	I1202 20:54:15.711561  716692 default_sa.go:55] duration metric: took 2.828698ms for default service account to be created ...
	I1202 20:54:15.711573  716692 system_pods.go:116] waiting for k8s-apps to be running ...
	I1202 20:54:15.717996  716692 system_pods.go:86] 8 kube-system pods found
	I1202 20:54:15.718093  716692 system_pods.go:89] "coredns-5dd5756b68-ptzsf" [14b9d2d2-4853-419f-ad27-5d6f4c9c7e2c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 20:54:15.718116  716692 system_pods.go:89] "etcd-old-k8s-version-992336" [22527607-8153-442e-97cb-93555cbcdd3a] Running
	I1202 20:54:15.718138  716692 system_pods.go:89] "kindnet-jvmsp" [51a76a82-d4d0-4909-a7a7-49ad2e3fd9f0] Running
	I1202 20:54:15.718161  716692 system_pods.go:89] "kube-apiserver-old-k8s-version-992336" [5049999c-2987-49b7-ba74-9d7621b0759a] Running
	I1202 20:54:15.718177  716692 system_pods.go:89] "kube-controller-manager-old-k8s-version-992336" [34f637f6-d1c4-4620-9705-439b4db0805a] Running
	I1202 20:54:15.718191  716692 system_pods.go:89] "kube-proxy-qpzt8" [e7130e4a-3fd7-49ba-b6c6-ea6857c76765] Running
	I1202 20:54:15.718206  716692 system_pods.go:89] "kube-scheduler-old-k8s-version-992336" [c4e33a26-6df9-440c-9eff-9197bcdfd55c] Running
	I1202 20:54:15.718228  716692 system_pods.go:89] "storage-provisioner" [398f9134-7016-4782-9541-255e9925dd8d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 20:54:15.718281  716692 retry.go:31] will retry after 276.77064ms: missing components: kube-dns
	I1202 20:54:16.000567  716692 system_pods.go:86] 8 kube-system pods found
	I1202 20:54:16.000612  716692 system_pods.go:89] "coredns-5dd5756b68-ptzsf" [14b9d2d2-4853-419f-ad27-5d6f4c9c7e2c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 20:54:16.000626  716692 system_pods.go:89] "etcd-old-k8s-version-992336" [22527607-8153-442e-97cb-93555cbcdd3a] Running
	I1202 20:54:16.000634  716692 system_pods.go:89] "kindnet-jvmsp" [51a76a82-d4d0-4909-a7a7-49ad2e3fd9f0] Running
	I1202 20:54:16.000639  716692 system_pods.go:89] "kube-apiserver-old-k8s-version-992336" [5049999c-2987-49b7-ba74-9d7621b0759a] Running
	I1202 20:54:16.000645  716692 system_pods.go:89] "kube-controller-manager-old-k8s-version-992336" [34f637f6-d1c4-4620-9705-439b4db0805a] Running
	I1202 20:54:16.000650  716692 system_pods.go:89] "kube-proxy-qpzt8" [e7130e4a-3fd7-49ba-b6c6-ea6857c76765] Running
	I1202 20:54:16.000655  716692 system_pods.go:89] "kube-scheduler-old-k8s-version-992336" [c4e33a26-6df9-440c-9eff-9197bcdfd55c] Running
	I1202 20:54:16.000668  716692 system_pods.go:89] "storage-provisioner" [398f9134-7016-4782-9541-255e9925dd8d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 20:54:16.000689  716692 retry.go:31] will retry after 364.624402ms: missing components: kube-dns
	I1202 20:54:16.371635  716692 system_pods.go:86] 8 kube-system pods found
	I1202 20:54:16.371677  716692 system_pods.go:89] "coredns-5dd5756b68-ptzsf" [14b9d2d2-4853-419f-ad27-5d6f4c9c7e2c] Running
	I1202 20:54:16.371686  716692 system_pods.go:89] "etcd-old-k8s-version-992336" [22527607-8153-442e-97cb-93555cbcdd3a] Running
	I1202 20:54:16.371692  716692 system_pods.go:89] "kindnet-jvmsp" [51a76a82-d4d0-4909-a7a7-49ad2e3fd9f0] Running
	I1202 20:54:16.371703  716692 system_pods.go:89] "kube-apiserver-old-k8s-version-992336" [5049999c-2987-49b7-ba74-9d7621b0759a] Running
	I1202 20:54:16.371709  716692 system_pods.go:89] "kube-controller-manager-old-k8s-version-992336" [34f637f6-d1c4-4620-9705-439b4db0805a] Running
	I1202 20:54:16.371714  716692 system_pods.go:89] "kube-proxy-qpzt8" [e7130e4a-3fd7-49ba-b6c6-ea6857c76765] Running
	I1202 20:54:16.371719  716692 system_pods.go:89] "kube-scheduler-old-k8s-version-992336" [c4e33a26-6df9-440c-9eff-9197bcdfd55c] Running
	I1202 20:54:16.371724  716692 system_pods.go:89] "storage-provisioner" [398f9134-7016-4782-9541-255e9925dd8d] Running
	I1202 20:54:16.371739  716692 system_pods.go:126] duration metric: took 660.158192ms to wait for k8s-apps to be running ...
	I1202 20:54:16.371748  716692 system_svc.go:44] waiting for kubelet service to be running ....
	I1202 20:54:16.371807  716692 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 20:54:16.386794  716692 system_svc.go:56] duration metric: took 15.033153ms WaitForService to wait for kubelet
	I1202 20:54:16.386831  716692 kubeadm.go:587] duration metric: took 15.181968401s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 20:54:16.386856  716692 node_conditions.go:102] verifying NodePressure condition ...
	I1202 20:54:16.390326  716692 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1202 20:54:16.390361  716692 node_conditions.go:123] node cpu capacity is 8
	I1202 20:54:16.390379  716692 node_conditions.go:105] duration metric: took 3.516684ms to run NodePressure ...
	I1202 20:54:16.390396  716692 start.go:242] waiting for startup goroutines ...
	I1202 20:54:16.390406  716692 start.go:247] waiting for cluster config update ...
	I1202 20:54:16.390420  716692 start.go:256] writing updated cluster config ...
	I1202 20:54:16.390789  716692 ssh_runner.go:195] Run: rm -f paused
	I1202 20:54:16.395273  716692 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1202 20:54:16.400424  716692 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-ptzsf" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:54:16.406227  716692 pod_ready.go:94] pod "coredns-5dd5756b68-ptzsf" is "Ready"
	I1202 20:54:16.406255  716692 pod_ready.go:86] duration metric: took 5.804714ms for pod "coredns-5dd5756b68-ptzsf" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:54:16.410468  716692 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-992336" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:54:16.417301  716692 pod_ready.go:94] pod "etcd-old-k8s-version-992336" is "Ready"
	I1202 20:54:16.417332  716692 pod_ready.go:86] duration metric: took 6.839095ms for pod "etcd-old-k8s-version-992336" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:54:16.421136  716692 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-992336" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:54:16.427624  716692 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-992336" is "Ready"
	I1202 20:54:16.427661  716692 pod_ready.go:86] duration metric: took 6.498123ms for pod "kube-apiserver-old-k8s-version-992336" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:54:16.438373  716692 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-992336" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:54:16.800320  716692 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-992336" is "Ready"
	I1202 20:54:16.800357  716692 pod_ready.go:86] duration metric: took 361.948942ms for pod "kube-controller-manager-old-k8s-version-992336" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:54:17.002229  716692 pod_ready.go:83] waiting for pod "kube-proxy-qpzt8" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:54:17.400280  716692 pod_ready.go:94] pod "kube-proxy-qpzt8" is "Ready"
	I1202 20:54:17.400319  716692 pod_ready.go:86] duration metric: took 398.064856ms for pod "kube-proxy-qpzt8" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:54:17.600535  716692 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-992336" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:54:18.000715  716692 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-992336" is "Ready"
	I1202 20:54:18.000748  716692 pod_ready.go:86] duration metric: took 400.185497ms for pod "kube-scheduler-old-k8s-version-992336" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:54:18.000762  716692 pod_ready.go:40] duration metric: took 1.605445743s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1202 20:54:18.054186  716692 start.go:625] kubectl: 1.34.2, cluster: 1.28.0 (minor skew: 6)
	I1202 20:54:18.056425  716692 out.go:203] 
	W1202 20:54:18.060048  716692 out.go:285] ! /usr/local/bin/kubectl is version 1.34.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1202 20:54:18.061775  716692 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1202 20:54:18.063512  716692 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-992336" cluster and "default" namespace by default
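The "will retry after ...: missing components: kube-dns" lines earlier in this run come from a generic retry loop: evaluate a readiness check, and on failure sleep a growing, jittered delay until an overall deadline is hit. A minimal sketch of that pattern with an illustrative check function (the backoff factors are assumptions, not minikube's exact values):

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retryUntil re-runs check with a jittered, doubling backoff until it
// succeeds or the overall timeout elapses.
func retryUntil(timeout time.Duration, check func() error) error {
	deadline := time.Now().Add(timeout)
	backoff := 200 * time.Millisecond
	for {
		err := check()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out: %w", err)
		}
		// Jitter the delay so concurrent waiters do not poll in lockstep.
		wait := backoff + time.Duration(rand.Int63n(int64(backoff)))
		fmt.Printf("will retry after %s: %v\n", wait, err)
		time.Sleep(wait)
		if backoff < 2*time.Second {
			backoff *= 2
		}
	}
}

func main() {
	attempts := 0
	_ = retryUntil(10*time.Second, func() error {
		attempts++
		if attempts < 3 {
			return fmt.Errorf("missing components: kube-dns")
		}
		return nil
	})
}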
	I1202 20:54:17.928361  727677 cli_runner.go:164] Run: docker network inspect no-preload-336331 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 20:54:17.951421  727677 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1202 20:54:17.956558  727677 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 20:54:17.968496  727677 kubeadm.go:884] updating cluster {Name:no-preload-336331 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-336331 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 20:54:17.968633  727677 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1202 20:54:17.968679  727677 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 20:54:17.999618  727677 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
	I1202 20:54:17.999648  727677 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-beta.0 registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 registry.k8s.io/kube-scheduler:v1.35.0-beta.0 registry.k8s.io/kube-proxy:v1.35.0-beta.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.5-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1202 20:54:17.999704  727677 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 20:54:17.999721  727677 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 20:54:17.999953  727677 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1202 20:54:17.999975  727677 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 20:54:17.999975  727677 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
	I1202 20:54:17.999953  727677 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 20:54:18.000175  727677 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1202 20:54:18.000212  727677 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 20:54:18.001156  727677 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 20:54:18.001180  727677 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 20:54:18.001255  727677 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 20:54:18.001327  727677 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 20:54:18.001372  727677 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1202 20:54:18.001454  727677 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
	I1202 20:54:18.001663  727677 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1202 20:54:18.002029  727677 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 20:54:18.184311  727677 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.5-0
	I1202 20:54:18.184327  727677 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.13.1
	I1202 20:54:18.204871  727677 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 20:54:18.207319  727677 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 20:54:18.229742  727677 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 20:54:18.229835  727677 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 20:54:18.236861  727677 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139" in container runtime
	I1202 20:54:18.236923  727677 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1202 20:54:18.236980  727677 ssh_runner.go:195] Run: which crictl
	I1202 20:54:18.237116  727677 cache_images.go:118] "registry.k8s.io/etcd:3.6.5-0" needs transfer: "registry.k8s.io/etcd:3.6.5-0" does not exist at hash "a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1" in container runtime
	I1202 20:54:18.237146  727677 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.5-0
	I1202 20:54:18.237183  727677 ssh_runner.go:195] Run: which crictl
	I1202 20:54:18.238676  727677 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1202 20:54:18.264364  727677 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" does not exist at hash "45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc" in container runtime
	I1202 20:54:18.264418  727677 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 20:54:18.264479  727677 ssh_runner.go:195] Run: which crictl
	I1202 20:54:18.275876  727677 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-beta.0" does not exist at hash "8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810" in container runtime
	I1202 20:54:18.275936  727677 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 20:54:18.275988  727677 ssh_runner.go:195] Run: which crictl
	I1202 20:54:18.297431  727677 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" does not exist at hash "7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46" in container runtime
	I1202 20:54:18.297474  727677 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 20:54:18.297503  727677 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I1202 20:54:18.297524  727677 ssh_runner.go:195] Run: which crictl
	I1202 20:54:18.297534  727677 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1202 20:54:18.297472  727677 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" does not exist at hash "aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b" in container runtime
	I1202 20:54:18.297605  727677 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1202 20:54:18.297536  727677 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1202 20:54:18.297710  727677 ssh_runner.go:195] Run: which crictl
	I1202 20:54:18.297736  727677 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 20:54:18.297602  727677 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 20:54:18.297768  727677 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 20:54:18.297780  727677 ssh_runner.go:195] Run: which crictl
	I1202 20:54:18.302793  727677 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 20:54:18.334486  727677 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1202 20:54:18.335136  727677 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1202 20:54:18.335180  727677 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1202 20:54:18.335221  727677 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 20:54:18.335271  727677 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 20:54:18.337035  727677 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
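Each "needs transfer" line above is the outcome of asking the runtime whether the image is already present; when it is not, the stale tag is removed with crictl rmi and the cached tarball is queued for loading. A minimal sketch of that decision, run locally for brevity where the log runs the same commands over SSH (planTransfers and imageInRuntime are hypothetical names):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// imageInRuntime reports whether the container runtime already has the image.
func imageInRuntime(image string) bool {
	out, err := exec.Command("sudo", "podman", "image", "inspect",
		"--format", "{{.Id}}", image).Output()
	return err == nil && strings.TrimSpace(string(out)) != ""
}

// planTransfers returns the images whose cached tarballs must be copied in.
func planTransfers(images []string) []string {
	var needed []string
	for _, img := range images {
		if imageInRuntime(img) {
			continue
		}
		fmt.Printf("%q needs transfer: does not exist in container runtime\n", img)
		// Drop any stale reference before the cached tarball is loaded.
		_ = exec.Command("sudo", "crictl", "rmi", img).Run()
		needed = append(needed, img)
	}
	return needed
}

func main() {
	fmt.Println(planTransfers([]string{
		"registry.k8s.io/etcd:3.6.5-0",
		"registry.k8s.io/coredns/coredns:v1.13.1",
	}))
}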
	W1202 20:54:17.576852  709465 pod_ready.go:104] pod "coredns-66bc5c9577-rhmml" is not "Ready", error: <nil>
	I1202 20:54:18.577767  709465 pod_ready.go:94] pod "coredns-66bc5c9577-rhmml" is "Ready"
	I1202 20:54:18.577800  709465 pod_ready.go:86] duration metric: took 28.007244457s for pod "coredns-66bc5c9577-rhmml" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:54:18.581522  709465 pod_ready.go:83] waiting for pod "etcd-bridge-775392" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:54:18.587515  709465 pod_ready.go:94] pod "etcd-bridge-775392" is "Ready"
	I1202 20:54:18.587547  709465 pod_ready.go:86] duration metric: took 5.99307ms for pod "etcd-bridge-775392" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:54:18.591244  709465 pod_ready.go:83] waiting for pod "kube-apiserver-bridge-775392" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:54:18.598277  709465 pod_ready.go:94] pod "kube-apiserver-bridge-775392" is "Ready"
	I1202 20:54:18.598311  709465 pod_ready.go:86] duration metric: took 7.031931ms for pod "kube-apiserver-bridge-775392" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:54:18.601543  709465 pod_ready.go:83] waiting for pod "kube-controller-manager-bridge-775392" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:54:18.776994  709465 pod_ready.go:94] pod "kube-controller-manager-bridge-775392" is "Ready"
	I1202 20:54:18.777154  709465 pod_ready.go:86] duration metric: took 175.540914ms for pod "kube-controller-manager-bridge-775392" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:54:18.975269  709465 pod_ready.go:83] waiting for pod "kube-proxy-27ztb" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:54:19.376063  709465 pod_ready.go:94] pod "kube-proxy-27ztb" is "Ready"
	I1202 20:54:19.376155  709465 pod_ready.go:86] duration metric: took 400.85748ms for pod "kube-proxy-27ztb" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:54:19.575051  709465 pod_ready.go:83] waiting for pod "kube-scheduler-bridge-775392" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:54:19.976783  709465 pod_ready.go:94] pod "kube-scheduler-bridge-775392" is "Ready"
	I1202 20:54:19.976829  709465 pod_ready.go:86] duration metric: took 401.718124ms for pod "kube-scheduler-bridge-775392" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:54:19.976851  709465 pod_ready.go:40] duration metric: took 39.476259808s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1202 20:54:20.047615  709465 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1202 20:54:20.049914  709465 out.go:179] * Done! kubectl is now configured to use "bridge-775392" cluster and "default" namespace by default
	I1202 20:54:18.376951  727677 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1202 20:54:18.377017  727677 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1202 20:54:18.377113  727677 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 20:54:18.377131  727677 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1202 20:54:18.377143  727677 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 20:54:18.382559  727677 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 20:54:18.382706  727677 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 20:54:18.420451  727677 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0
	I1202 20:54:18.420556  727677 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0
	I1202 20:54:18.425799  727677 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1
	I1202 20:54:18.425883  727677 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 20:54:18.425903  727677 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1202 20:54:18.426017  727677 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 20:54:18.426145  727677 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1202 20:54:18.426199  727677 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
	I1202 20:54:18.426279  727677 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1202 20:54:18.430457  727677 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.5-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.5-0': No such file or directory
	I1202 20:54:18.430491  727677 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
	I1202 20:54:18.430498  727677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 --> /var/lib/minikube/images/etcd_3.6.5-0 (22883840 bytes)
	I1202 20:54:18.430701  727677 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1202 20:54:18.490101  727677 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I1202 20:54:18.490126  727677 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
	I1202 20:54:18.490141  727677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (23562752 bytes)
	I1202 20:54:18.490163  727677 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
	I1202 20:54:18.490189  727677 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1202 20:54:18.490264  727677 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1202 20:54:18.490271  727677 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1202 20:54:18.490282  727677 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1202 20:54:18.490288  727677 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0': No such file or directory
	I1202 20:54:18.490316  727677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0 (23131648 bytes)
	I1202 20:54:18.490328  727677 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0-beta.0': No such file or directory
	I1202 20:54:18.490342  727677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0 (25788928 bytes)
	I1202 20:54:18.570621  727677 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1202 20:54:18.570646  727677 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0': No such file or directory
	I1202 20:54:18.570666  727677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (321024 bytes)
	I1202 20:54:18.570674  727677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0 (27682304 bytes)
	I1202 20:54:18.570633  727677 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0': No such file or directory
	I1202 20:54:18.570791  727677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0 (17239040 bytes)
	I1202 20:54:18.673543  727677 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1202 20:54:18.673637  727677 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	I1202 20:54:19.155904  727677 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 from cache
	I1202 20:54:19.155954  727677 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.5-0
	I1202 20:54:19.156005  727677 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.5-0
	I1202 20:54:19.354185  727677 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 20:54:20.738425  727677 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.384193288s)
	I1202 20:54:20.738419  727677 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.5-0: (1.582380398s)
	I1202 20:54:20.738579  727677 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 from cache
	I1202 20:54:20.738609  727677 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1202 20:54:20.738547  727677 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1202 20:54:20.738662  727677 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1202 20:54:20.738674  727677 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 20:54:20.738720  727677 ssh_runner.go:195] Run: which crictl
	I1202 20:54:20.745212  727677 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 20:54:22.268809  727677 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: (1.530124034s)
	I1202 20:54:22.268850  727677 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 from cache
	I1202 20:54:22.268890  727677 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1202 20:54:22.268900  727677 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.523644459s)
	I1202 20:54:22.268959  727677 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 20:54:22.268961  727677 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1202 20:54:23.660350  727677 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: (1.391360905s)
	I1202 20:54:23.660396  727677 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 from cache
	I1202 20:54:23.660431  727677 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1202 20:54:23.660452  727677 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.39146888s)
	I1202 20:54:23.660489  727677 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1
	I1202 20:54:23.660525  727677 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 20:54:25.020674  727677 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1: (1.360154945s)
	I1202 20:54:25.020718  727677 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 from cache
	I1202 20:54:25.020739  727677 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1202 20:54:25.020783  727677 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1202 20:54:25.020691  727677 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.360143576s)
	I1202 20:54:25.020825  727677 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1202 20:54:25.020937  727677 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1202 20:54:26.202310  727677 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: (1.181498726s)
	I1202 20:54:26.202355  727677 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 from cache
	I1202 20:54:26.202377  727677 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.181413485s)
	I1202 20:54:26.202384  727677 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1202 20:54:26.202400  727677 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1202 20:54:26.202434  727677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1202 20:54:26.202449  727677 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	
	
	==> CRI-O <==
	Dec 02 20:54:15 old-k8s-version-992336 crio[771]: time="2025-12-02T20:54:15.704955034Z" level=info msg="Starting container: b1bb5abb4b4af97e7c0157b0bcd7675809088093ed3a83b68ba41901e135d1bf" id=f3da5fc3-fb1d-4f28-aab7-fc3ad4a441a3 name=/runtime.v1.RuntimeService/StartContainer
	Dec 02 20:54:15 old-k8s-version-992336 crio[771]: time="2025-12-02T20:54:15.707565795Z" level=info msg="Started container" PID=2119 containerID=b1bb5abb4b4af97e7c0157b0bcd7675809088093ed3a83b68ba41901e135d1bf description=kube-system/coredns-5dd5756b68-ptzsf/coredns id=f3da5fc3-fb1d-4f28-aab7-fc3ad4a441a3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=379455af33111ba73c4a7756c9856a1a52c4fffb249e439e679022f57dfb8737
	Dec 02 20:54:18 old-k8s-version-992336 crio[771]: time="2025-12-02T20:54:18.576202886Z" level=info msg="Running pod sandbox: default/busybox/POD" id=353cb481-e740-4248-b102-215b71d4b24c name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 02 20:54:18 old-k8s-version-992336 crio[771]: time="2025-12-02T20:54:18.576328584Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 20:54:18 old-k8s-version-992336 crio[771]: time="2025-12-02T20:54:18.583522271Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:ce278135952e3d9d1309841ee356740dc48451e95615cfd31a5d309958aebfb4 UID:80960db9-5402-41bc-8354-45cbf0d86346 NetNS:/var/run/netns/50d15457-6c48-4a67-bf67-9b5107593339 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0004d63c8}] Aliases:map[]}"
	Dec 02 20:54:18 old-k8s-version-992336 crio[771]: time="2025-12-02T20:54:18.583573266Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 02 20:54:18 old-k8s-version-992336 crio[771]: time="2025-12-02T20:54:18.599594287Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:ce278135952e3d9d1309841ee356740dc48451e95615cfd31a5d309958aebfb4 UID:80960db9-5402-41bc-8354-45cbf0d86346 NetNS:/var/run/netns/50d15457-6c48-4a67-bf67-9b5107593339 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0004d63c8}] Aliases:map[]}"
	Dec 02 20:54:18 old-k8s-version-992336 crio[771]: time="2025-12-02T20:54:18.599809902Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 02 20:54:18 old-k8s-version-992336 crio[771]: time="2025-12-02T20:54:18.601093126Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 02 20:54:18 old-k8s-version-992336 crio[771]: time="2025-12-02T20:54:18.603356045Z" level=info msg="Ran pod sandbox ce278135952e3d9d1309841ee356740dc48451e95615cfd31a5d309958aebfb4 with infra container: default/busybox/POD" id=353cb481-e740-4248-b102-215b71d4b24c name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 02 20:54:18 old-k8s-version-992336 crio[771]: time="2025-12-02T20:54:18.606741359Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=783c7eda-b3a3-4b56-87f2-b905aedf1d1c name=/runtime.v1.ImageService/ImageStatus
	Dec 02 20:54:18 old-k8s-version-992336 crio[771]: time="2025-12-02T20:54:18.607285584Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=783c7eda-b3a3-4b56-87f2-b905aedf1d1c name=/runtime.v1.ImageService/ImageStatus
	Dec 02 20:54:18 old-k8s-version-992336 crio[771]: time="2025-12-02T20:54:18.607401689Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=783c7eda-b3a3-4b56-87f2-b905aedf1d1c name=/runtime.v1.ImageService/ImageStatus
	Dec 02 20:54:18 old-k8s-version-992336 crio[771]: time="2025-12-02T20:54:18.609230572Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=9adc48b2-66a5-442f-b42e-455e9244aa8b name=/runtime.v1.ImageService/PullImage
	Dec 02 20:54:18 old-k8s-version-992336 crio[771]: time="2025-12-02T20:54:18.611959245Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 02 20:54:20 old-k8s-version-992336 crio[771]: time="2025-12-02T20:54:20.661109629Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=9adc48b2-66a5-442f-b42e-455e9244aa8b name=/runtime.v1.ImageService/PullImage
	Dec 02 20:54:20 old-k8s-version-992336 crio[771]: time="2025-12-02T20:54:20.662708986Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=2b962062-5e56-4e5e-9094-b2b692e93a4b name=/runtime.v1.ImageService/ImageStatus
	Dec 02 20:54:20 old-k8s-version-992336 crio[771]: time="2025-12-02T20:54:20.664555148Z" level=info msg="Creating container: default/busybox/busybox" id=32ed39d5-9902-453c-ab2b-a6adcee22460 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 20:54:20 old-k8s-version-992336 crio[771]: time="2025-12-02T20:54:20.664704329Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 20:54:20 old-k8s-version-992336 crio[771]: time="2025-12-02T20:54:20.669805139Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 20:54:20 old-k8s-version-992336 crio[771]: time="2025-12-02T20:54:20.670247595Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 20:54:20 old-k8s-version-992336 crio[771]: time="2025-12-02T20:54:20.699137959Z" level=info msg="Created container 29ff051dcc625c5e2fa97005cdbb10f3655004cd1e6be2b2614b44b5787fa886: default/busybox/busybox" id=32ed39d5-9902-453c-ab2b-a6adcee22460 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 20:54:20 old-k8s-version-992336 crio[771]: time="2025-12-02T20:54:20.699996819Z" level=info msg="Starting container: 29ff051dcc625c5e2fa97005cdbb10f3655004cd1e6be2b2614b44b5787fa886" id=8f54531c-f1e3-45cb-bb9f-bb38f6cbc5fe name=/runtime.v1.RuntimeService/StartContainer
	Dec 02 20:54:20 old-k8s-version-992336 crio[771]: time="2025-12-02T20:54:20.702116211Z" level=info msg="Started container" PID=2191 containerID=29ff051dcc625c5e2fa97005cdbb10f3655004cd1e6be2b2614b44b5787fa886 description=default/busybox/busybox id=8f54531c-f1e3-45cb-bb9f-bb38f6cbc5fe name=/runtime.v1.RuntimeService/StartContainer sandboxID=ce278135952e3d9d1309841ee356740dc48451e95615cfd31a5d309958aebfb4
	Dec 02 20:54:28 old-k8s-version-992336 crio[771]: time="2025-12-02T20:54:28.36716136Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	29ff051dcc625       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   9 seconds ago       Running             busybox                   0                   ce278135952e3       busybox                                          default
	b1bb5abb4b4af       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      14 seconds ago      Running             coredns                   0                   379455af33111       coredns-5dd5756b68-ptzsf                         kube-system
	0f02028eae826       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      14 seconds ago      Running             storage-provisioner       0                   9095da8c8cb9e       storage-provisioner                              kube-system
	621c9648fd347       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    25 seconds ago      Running             kindnet-cni               0                   9c6f436a7fde1       kindnet-jvmsp                                    kube-system
	339fa9af03dee       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                      27 seconds ago      Running             kube-proxy                0                   71a231fc3ed62       kube-proxy-qpzt8                                 kube-system
	e6cd6c77012e7       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      46 seconds ago      Running             etcd                      0                   28000c72e15ea       etcd-old-k8s-version-992336                      kube-system
	13c3227daaffe       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                      46 seconds ago      Running             kube-controller-manager   0                   402b0d87e6d15       kube-controller-manager-old-k8s-version-992336   kube-system
	b48411ee3a8f7       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                      46 seconds ago      Running             kube-apiserver            0                   a9fe92a585a0c       kube-apiserver-old-k8s-version-992336            kube-system
	6edb9a9aab29a       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                      46 seconds ago      Running             kube-scheduler            0                   ff1b539b1265c       kube-scheduler-old-k8s-version-992336            kube-system
	
	
	==> coredns [b1bb5abb4b4af97e7c0157b0bcd7675809088093ed3a83b68ba41901e135d1bf] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 4c7f44b73086be760ec9e64204f63c5cc5a952c8c1c55ba0b41d8fc3315ce3c7d0259d04847cb8b4561043d4549603f3bccfd9b397eeb814eef159d244d26f39
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:54427 - 65106 "HINFO IN 4000285752932304202.6110926849535322667. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.067483676s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-992336
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-992336
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c0dec3f81fa1d67ed4ee425e0873d0aa009f9b92
	                    minikube.k8s.io/name=old-k8s-version-992336
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_02T20_53_49_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 02 Dec 2025 20:53:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-992336
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 02 Dec 2025 20:54:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 02 Dec 2025 20:54:20 +0000   Tue, 02 Dec 2025 20:53:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 02 Dec 2025 20:54:20 +0000   Tue, 02 Dec 2025 20:53:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 02 Dec 2025 20:54:20 +0000   Tue, 02 Dec 2025 20:53:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 02 Dec 2025 20:54:20 +0000   Tue, 02 Dec 2025 20:54:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    old-k8s-version-992336
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 c31a325af81b969158c21fa769271857
	  System UUID:                8d62aba3-5101-4346-987f-a9a614755c7a
	  Boot ID:                    9dd5456f-e394-4b4b-9458-48d26faf507e
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-5dd5756b68-ptzsf                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     29s
	  kube-system                 etcd-old-k8s-version-992336                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         41s
	  kube-system                 kindnet-jvmsp                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      29s
	  kube-system                 kube-apiserver-old-k8s-version-992336             250m (3%)     0 (0%)      0 (0%)           0 (0%)         43s
	  kube-system                 kube-controller-manager-old-k8s-version-992336    200m (2%)     0 (0%)      0 (0%)           0 (0%)         43s
	  kube-system                 kube-proxy-qpzt8                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-scheduler-old-k8s-version-992336             100m (1%)     0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 27s   kube-proxy       
	  Normal  Starting                 41s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  41s   kubelet          Node old-k8s-version-992336 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    41s   kubelet          Node old-k8s-version-992336 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     41s   kubelet          Node old-k8s-version-992336 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           29s   node-controller  Node old-k8s-version-992336 event: Registered Node old-k8s-version-992336 in Controller
	  Normal  NodeReady                15s   kubelet          Node old-k8s-version-992336 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: a6 a8 6c 2c e4 fb 66 95 0d 9f b9 e6 08 00
	[ +32.254238] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a6 a8 6c 2c e4 fb 66 95 0d 9f b9 e6 08 00
	[Dec 2 20:52] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 62 03 bd 14 45 8a 08 06
	[  +0.000590] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 96 27 ad 0d 40 04 08 06
	[Dec 2 20:53] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 56 d8 4f 6f 7c f7 08 06
	[  +0.000700] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 82 e4 ba c0 78 5f 08 06
	[ +10.119645] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000022] ll header: 00000000: ff ff ff ff ff ff ea 15 d8 88 c9 57 08 06
	[  +2.447166] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 62 df 09 53 d6 6e 08 06
	[  +0.000374] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 62 8d 06 71 0a 5e 08 06
	[Dec 2 20:54] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 12 47 13 50 f6 bc 08 06
	[  +0.001523] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ea 15 d8 88 c9 57 08 06
	[ +22.123549] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 36 0d 45 06 42 2a 08 06
	[  +0.000389] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 56 d8 4f 6f 7c f7 08 06
	
	
	==> etcd [e6cd6c77012e7c6625bed144b71d1ffd1143bc79ab593b0ce2f2b2d965c5cdc3] <==
	{"level":"info","ts":"2025-12-02T20:53:43.901508Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 switched to configuration voters=(16125559238023404339)"}
	{"level":"info","ts":"2025-12-02T20:53:43.901706Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","added-peer-id":"dfc97eb0aae75b33","added-peer-peer-urls":["https://192.168.94.2:2380"]}
	{"level":"info","ts":"2025-12-02T20:53:43.902967Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-12-02T20:53:43.903179Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2025-12-02T20:53:43.90393Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2025-12-02T20:53:43.903342Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"dfc97eb0aae75b33","initial-advertise-peer-urls":["https://192.168.94.2:2380"],"listen-peer-urls":["https://192.168.94.2:2380"],"advertise-client-urls":["https://192.168.94.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.94.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-02T20:53:43.903399Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-02T20:53:44.7891Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 is starting a new election at term 1"}
	{"level":"info","ts":"2025-12-02T20:53:44.789147Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-12-02T20:53:44.789165Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgPreVoteResp from dfc97eb0aae75b33 at term 1"}
	{"level":"info","ts":"2025-12-02T20:53:44.789181Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became candidate at term 2"}
	{"level":"info","ts":"2025-12-02T20:53:44.789187Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgVoteResp from dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2025-12-02T20:53:44.789195Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became leader at term 2"}
	{"level":"info","ts":"2025-12-02T20:53:44.789202Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: dfc97eb0aae75b33 elected leader dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2025-12-02T20:53:44.790051Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-02T20:53:44.790709Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"dfc97eb0aae75b33","local-member-attributes":"{Name:old-k8s-version-992336 ClientURLs:[https://192.168.94.2:2379]}","request-path":"/0/members/dfc97eb0aae75b33/attributes","cluster-id":"da400bbece288f5a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-02T20:53:44.790754Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-02T20:53:44.790855Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-02T20:53:44.790985Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-02T20:53:44.79102Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-02T20:53:44.791033Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-02T20:53:44.791608Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-02T20:53:44.791657Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-02T20:53:44.79233Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-02T20:53:44.792372Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.94.2:2379"}
	
	
	==> kernel <==
	 20:54:31 up  2:36,  0 user,  load average: 4.17, 3.56, 2.36
	Linux old-k8s-version-992336 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [621c9648fd347c9ca8329d8bd4d81e6d1d4e90bc12dac7ddaf233083cedea68d] <==
	I1202 20:54:04.844631       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1202 20:54:04.899857       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1202 20:54:04.900077       1 main.go:148] setting mtu 1500 for CNI 
	I1202 20:54:04.900220       1 main.go:178] kindnetd IP family: "ipv4"
	I1202 20:54:04.900257       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-02T20:54:05Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1202 20:54:05.106779       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1202 20:54:05.106813       1 controller.go:381] "Waiting for informer caches to sync"
	I1202 20:54:05.106841       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1202 20:54:05.106970       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1202 20:54:05.307833       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1202 20:54:05.307873       1 metrics.go:72] Registering metrics
	I1202 20:54:05.307962       1 controller.go:711] "Syncing nftables rules"
	I1202 20:54:15.114170       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1202 20:54:15.114227       1 main.go:301] handling current node
	I1202 20:54:25.109182       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1202 20:54:25.109224       1 main.go:301] handling current node
	
	
	==> kube-apiserver [b48411ee3a8f7fecea2edaf4c13f4103dbbcdf16ee98c433adc4df5471ca1bcb] <==
	I1202 20:53:45.982258       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1202 20:53:45.982310       1 aggregator.go:166] initial CRD sync complete...
	I1202 20:53:45.982329       1 autoregister_controller.go:141] Starting autoregister controller
	I1202 20:53:45.982335       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1202 20:53:45.982337       1 shared_informer.go:318] Caches are synced for configmaps
	I1202 20:53:45.982312       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1202 20:53:45.982343       1 cache.go:39] Caches are synced for autoregister controller
	I1202 20:53:45.983266       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1202 20:53:45.983946       1 controller.go:624] quota admission added evaluator for: namespaces
	I1202 20:53:45.994007       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1202 20:53:46.887047       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1202 20:53:46.890821       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1202 20:53:46.890842       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1202 20:53:47.355663       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1202 20:53:47.397134       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1202 20:53:47.493687       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1202 20:53:47.499573       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1202 20:53:47.500738       1 controller.go:624] quota admission added evaluator for: endpoints
	I1202 20:53:47.505288       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1202 20:53:47.929720       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1202 20:53:49.106414       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1202 20:53:49.124410       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1202 20:53:49.136205       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1202 20:54:01.114582       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I1202 20:54:01.162193       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [13c3227daaffef77da315fa9be228e0a0995185c3eee0e9bd58b5421876fde1b] <==
	I1202 20:54:01.185940       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I1202 20:54:01.229929       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I1202 20:54:01.279502       1 shared_informer.go:318] Caches are synced for resource quota
	I1202 20:54:01.285761       1 shared_informer.go:318] Caches are synced for resource quota
	I1202 20:54:01.328905       1 shared_informer.go:318] Caches are synced for persistent volume
	I1202 20:54:01.339253       1 shared_informer.go:318] Caches are synced for PV protection
	I1202 20:54:01.365272       1 shared_informer.go:318] Caches are synced for attach detach
	I1202 20:54:01.597950       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-ptzsf"
	I1202 20:54:01.615237       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-hp4qj"
	I1202 20:54:01.637811       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="469.460917ms"
	I1202 20:54:01.650351       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="12.377388ms"
	I1202 20:54:01.650515       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="107.038µs"
	I1202 20:54:01.691509       1 shared_informer.go:318] Caches are synced for garbage collector
	I1202 20:54:01.691596       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1202 20:54:01.698096       1 shared_informer.go:318] Caches are synced for garbage collector
	I1202 20:54:01.701433       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1202 20:54:01.717663       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-hp4qj"
	I1202 20:54:01.730214       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="28.827657ms"
	I1202 20:54:01.739522       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="9.241044ms"
	I1202 20:54:01.739792       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="115.661µs"
	I1202 20:54:15.335136       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="125.024µs"
	I1202 20:54:15.355459       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="107.439µs"
	I1202 20:54:16.132866       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1202 20:54:16.311112       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="11.233403ms"
	I1202 20:54:16.311371       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="114.273µs"
	
	
	==> kube-proxy [339fa9af03deec7370c937537ec86559a603f0063bda76f2239da3369e9b5357] <==
	I1202 20:54:02.157667       1 server_others.go:69] "Using iptables proxy"
	I1202 20:54:02.168999       1 node.go:141] Successfully retrieved node IP: 192.168.94.2
	I1202 20:54:02.192902       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1202 20:54:02.195654       1 server_others.go:152] "Using iptables Proxier"
	I1202 20:54:02.195701       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1202 20:54:02.195713       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1202 20:54:02.195757       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1202 20:54:02.196054       1 server.go:846] "Version info" version="v1.28.0"
	I1202 20:54:02.196111       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 20:54:02.196836       1 config.go:315] "Starting node config controller"
	I1202 20:54:02.196869       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1202 20:54:02.197086       1 config.go:188] "Starting service config controller"
	I1202 20:54:02.197106       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1202 20:54:02.197131       1 config.go:97] "Starting endpoint slice config controller"
	I1202 20:54:02.197143       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1202 20:54:02.297059       1 shared_informer.go:318] Caches are synced for node config
	I1202 20:54:02.297177       1 shared_informer.go:318] Caches are synced for service config
	I1202 20:54:02.297185       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [6edb9a9aab29a896fe1068a1519bc2a994bb1867d7a308c11d490c348282955b] <==
	W1202 20:53:45.954243       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1202 20:53:45.954411       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1202 20:53:45.954704       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1202 20:53:45.954393       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1202 20:53:45.954392       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1202 20:53:45.954791       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1202 20:53:45.954813       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1202 20:53:45.954814       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1202 20:53:45.954834       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1202 20:53:45.954397       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1202 20:53:46.915414       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1202 20:53:46.915450       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1202 20:53:46.946940       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1202 20:53:46.946978       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1202 20:53:47.010669       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1202 20:53:47.010709       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1202 20:53:47.038350       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1202 20:53:47.038378       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1202 20:53:47.050961       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1202 20:53:47.051001       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1202 20:53:47.167733       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1202 20:53:47.167773       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1202 20:53:47.179238       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1202 20:53:47.179283       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I1202 20:53:47.549982       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 02 20:54:01 old-k8s-version-992336 kubelet[1381]: I1202 20:54:01.237204    1381 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6c5bj\" (UniqueName: \"kubernetes.io/projected/51a76a82-d4d0-4909-a7a7-49ad2e3fd9f0-kube-api-access-6c5bj\") pod \"kindnet-jvmsp\" (UID: \"51a76a82-d4d0-4909-a7a7-49ad2e3fd9f0\") " pod="kube-system/kindnet-jvmsp"
	Dec 02 20:54:01 old-k8s-version-992336 kubelet[1381]: I1202 20:54:01.337865    1381 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e7130e4a-3fd7-49ba-b6c6-ea6857c76765-kube-proxy\") pod \"kube-proxy-qpzt8\" (UID: \"e7130e4a-3fd7-49ba-b6c6-ea6857c76765\") " pod="kube-system/kube-proxy-qpzt8"
	Dec 02 20:54:01 old-k8s-version-992336 kubelet[1381]: I1202 20:54:01.337959    1381 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdz5c\" (UniqueName: \"kubernetes.io/projected/e7130e4a-3fd7-49ba-b6c6-ea6857c76765-kube-api-access-rdz5c\") pod \"kube-proxy-qpzt8\" (UID: \"e7130e4a-3fd7-49ba-b6c6-ea6857c76765\") " pod="kube-system/kube-proxy-qpzt8"
	Dec 02 20:54:01 old-k8s-version-992336 kubelet[1381]: I1202 20:54:01.338006    1381 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e7130e4a-3fd7-49ba-b6c6-ea6857c76765-xtables-lock\") pod \"kube-proxy-qpzt8\" (UID: \"e7130e4a-3fd7-49ba-b6c6-ea6857c76765\") " pod="kube-system/kube-proxy-qpzt8"
	Dec 02 20:54:01 old-k8s-version-992336 kubelet[1381]: I1202 20:54:01.339124    1381 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e7130e4a-3fd7-49ba-b6c6-ea6857c76765-lib-modules\") pod \"kube-proxy-qpzt8\" (UID: \"e7130e4a-3fd7-49ba-b6c6-ea6857c76765\") " pod="kube-system/kube-proxy-qpzt8"
	Dec 02 20:54:01 old-k8s-version-992336 kubelet[1381]: E1202 20:54:01.353400    1381 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Dec 02 20:54:01 old-k8s-version-992336 kubelet[1381]: E1202 20:54:01.353467    1381 projected.go:198] Error preparing data for projected volume kube-api-access-6c5bj for pod kube-system/kindnet-jvmsp: configmap "kube-root-ca.crt" not found
	Dec 02 20:54:01 old-k8s-version-992336 kubelet[1381]: E1202 20:54:01.354119    1381 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/51a76a82-d4d0-4909-a7a7-49ad2e3fd9f0-kube-api-access-6c5bj podName:51a76a82-d4d0-4909-a7a7-49ad2e3fd9f0 nodeName:}" failed. No retries permitted until 2025-12-02 20:54:01.853798535 +0000 UTC m=+12.775623874 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-6c5bj" (UniqueName: "kubernetes.io/projected/51a76a82-d4d0-4909-a7a7-49ad2e3fd9f0-kube-api-access-6c5bj") pod "kindnet-jvmsp" (UID: "51a76a82-d4d0-4909-a7a7-49ad2e3fd9f0") : configmap "kube-root-ca.crt" not found
	Dec 02 20:54:01 old-k8s-version-992336 kubelet[1381]: E1202 20:54:01.455779    1381 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Dec 02 20:54:01 old-k8s-version-992336 kubelet[1381]: E1202 20:54:01.455820    1381 projected.go:198] Error preparing data for projected volume kube-api-access-rdz5c for pod kube-system/kube-proxy-qpzt8: configmap "kube-root-ca.crt" not found
	Dec 02 20:54:01 old-k8s-version-992336 kubelet[1381]: E1202 20:54:01.455894    1381 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e7130e4a-3fd7-49ba-b6c6-ea6857c76765-kube-api-access-rdz5c podName:e7130e4a-3fd7-49ba-b6c6-ea6857c76765 nodeName:}" failed. No retries permitted until 2025-12-02 20:54:01.955869886 +0000 UTC m=+12.877695230 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-rdz5c" (UniqueName: "kubernetes.io/projected/e7130e4a-3fd7-49ba-b6c6-ea6857c76765-kube-api-access-rdz5c") pod "kube-proxy-qpzt8" (UID: "e7130e4a-3fd7-49ba-b6c6-ea6857c76765") : configmap "kube-root-ca.crt" not found
	Dec 02 20:54:02 old-k8s-version-992336 kubelet[1381]: I1202 20:54:02.249160    1381 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-qpzt8" podStartSLOduration=1.249101014 podCreationTimestamp="2025-12-02 20:54:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:54:02.248953254 +0000 UTC m=+13.170778601" watchObservedRunningTime="2025-12-02 20:54:02.249101014 +0000 UTC m=+13.170926339"
	Dec 02 20:54:15 old-k8s-version-992336 kubelet[1381]: I1202 20:54:15.304770    1381 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Dec 02 20:54:15 old-k8s-version-992336 kubelet[1381]: I1202 20:54:15.334698    1381 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-jvmsp" podStartSLOduration=11.811616622 podCreationTimestamp="2025-12-02 20:54:01 +0000 UTC" firstStartedPulling="2025-12-02 20:54:02.044952548 +0000 UTC m=+12.966777876" lastFinishedPulling="2025-12-02 20:54:04.567976613 +0000 UTC m=+15.489801940" observedRunningTime="2025-12-02 20:54:05.260461473 +0000 UTC m=+16.182286819" watchObservedRunningTime="2025-12-02 20:54:15.334640686 +0000 UTC m=+26.256466031"
	Dec 02 20:54:15 old-k8s-version-992336 kubelet[1381]: I1202 20:54:15.335028    1381 topology_manager.go:215] "Topology Admit Handler" podUID="398f9134-7016-4782-9541-255e9925dd8d" podNamespace="kube-system" podName="storage-provisioner"
	Dec 02 20:54:15 old-k8s-version-992336 kubelet[1381]: I1202 20:54:15.335295    1381 topology_manager.go:215] "Topology Admit Handler" podUID="14b9d2d2-4853-419f-ad27-5d6f4c9c7e2c" podNamespace="kube-system" podName="coredns-5dd5756b68-ptzsf"
	Dec 02 20:54:15 old-k8s-version-992336 kubelet[1381]: I1202 20:54:15.442837    1381 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/14b9d2d2-4853-419f-ad27-5d6f4c9c7e2c-config-volume\") pod \"coredns-5dd5756b68-ptzsf\" (UID: \"14b9d2d2-4853-419f-ad27-5d6f4c9c7e2c\") " pod="kube-system/coredns-5dd5756b68-ptzsf"
	Dec 02 20:54:15 old-k8s-version-992336 kubelet[1381]: I1202 20:54:15.442986    1381 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45kpz\" (UniqueName: \"kubernetes.io/projected/398f9134-7016-4782-9541-255e9925dd8d-kube-api-access-45kpz\") pod \"storage-provisioner\" (UID: \"398f9134-7016-4782-9541-255e9925dd8d\") " pod="kube-system/storage-provisioner"
	Dec 02 20:54:15 old-k8s-version-992336 kubelet[1381]: I1202 20:54:15.443112    1381 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxcf2\" (UniqueName: \"kubernetes.io/projected/14b9d2d2-4853-419f-ad27-5d6f4c9c7e2c-kube-api-access-hxcf2\") pod \"coredns-5dd5756b68-ptzsf\" (UID: \"14b9d2d2-4853-419f-ad27-5d6f4c9c7e2c\") " pod="kube-system/coredns-5dd5756b68-ptzsf"
	Dec 02 20:54:15 old-k8s-version-992336 kubelet[1381]: I1202 20:54:15.443156    1381 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/398f9134-7016-4782-9541-255e9925dd8d-tmp\") pod \"storage-provisioner\" (UID: \"398f9134-7016-4782-9541-255e9925dd8d\") " pod="kube-system/storage-provisioner"
	Dec 02 20:54:16 old-k8s-version-992336 kubelet[1381]: I1202 20:54:16.299936    1381 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=15.299878628 podCreationTimestamp="2025-12-02 20:54:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:54:16.28364145 +0000 UTC m=+27.205466795" watchObservedRunningTime="2025-12-02 20:54:16.299878628 +0000 UTC m=+27.221703996"
	Dec 02 20:54:16 old-k8s-version-992336 kubelet[1381]: I1202 20:54:16.300043    1381 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-ptzsf" podStartSLOduration=15.300018273 podCreationTimestamp="2025-12-02 20:54:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:54:16.299781969 +0000 UTC m=+27.221607316" watchObservedRunningTime="2025-12-02 20:54:16.300018273 +0000 UTC m=+27.221843620"
	Dec 02 20:54:18 old-k8s-version-992336 kubelet[1381]: I1202 20:54:18.273979    1381 topology_manager.go:215] "Topology Admit Handler" podUID="80960db9-5402-41bc-8354-45cbf0d86346" podNamespace="default" podName="busybox"
	Dec 02 20:54:18 old-k8s-version-992336 kubelet[1381]: I1202 20:54:18.366444    1381 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwskk\" (UniqueName: \"kubernetes.io/projected/80960db9-5402-41bc-8354-45cbf0d86346-kube-api-access-xwskk\") pod \"busybox\" (UID: \"80960db9-5402-41bc-8354-45cbf0d86346\") " pod="default/busybox"
	Dec 02 20:54:21 old-k8s-version-992336 kubelet[1381]: I1202 20:54:21.310114    1381 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.256741868 podCreationTimestamp="2025-12-02 20:54:18 +0000 UTC" firstStartedPulling="2025-12-02 20:54:18.608197837 +0000 UTC m=+29.530023183" lastFinishedPulling="2025-12-02 20:54:20.661484278 +0000 UTC m=+31.583309615" observedRunningTime="2025-12-02 20:54:21.30947725 +0000 UTC m=+32.231302596" watchObservedRunningTime="2025-12-02 20:54:21.3100283 +0000 UTC m=+32.231853642"
	
	
	==> storage-provisioner [0f02028eae8267eb75a3f18af53d205c7f76e0bbd850e323fdc18dffca8fddd4] <==
	I1202 20:54:15.715044       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1202 20:54:15.725345       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1202 20:54:15.725407       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1202 20:54:15.735181       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1202 20:54:15.735894       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"17fffbb9-16db-4d60-9564-e341806dca02", APIVersion:"v1", ResourceVersion:"397", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-992336_679a5a62-5bc0-4d7d-8705-8b23de1a2f08 became leader
	I1202 20:54:15.735974       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-992336_679a5a62-5bc0-4d7d-8705-8b23de1a2f08!
	I1202 20:54:15.836090       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-992336_679a5a62-5bc0-4d7d-8705-8b23de1a2f08!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-992336 -n old-k8s-version-992336
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-992336 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (3.40s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.42s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-336331 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-336331 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (266.851265ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T20:55:10Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p no-preload-336331 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-336331 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-336331 describe deploy/metrics-server -n kube-system: exit status 1 (61.250436ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-336331 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
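The MK_ADDON_ENABLE_PAUSED failure above is raised before any addon manifests are applied: minikube's paused-state check shells into the node and runs the "sudo runc list -f json" command quoted in the stderr, which fails on this crio profile because /run/runc does not exist. A minimal sketch of re-running that check by hand, assuming the profile name from this run and the test binary at out/minikube-linux-amd64 (the exact invocation is an illustration, not part of the test):

	# Hypothetical reproduction of the paused-state check quoted in the error above
	out/minikube-linux-amd64 ssh -p no-preload-336331 -- sudo runc list -f json
	# on a crio-backed node this is expected to fail with: open /run/runc: no such file or directory

	# Hypothetical check of whether the metrics-server deployment (and its fake.domain image override) was ever created
	kubectl --context no-preload-336331 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'

Because the enable command exits before applying the addon, the kubectl describe above reports the deployment as NotFound rather than showing a wrong image.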
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-336331
helpers_test.go:243: (dbg) docker inspect no-preload-336331:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5c0b97280754bca09a214a1209794eaed88bdb20761bc875787e6ff1daeba56e",
	        "Created": "2025-12-02T20:54:14.239653127Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 728459,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-02T20:54:14.284526419Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/5c0b97280754bca09a214a1209794eaed88bdb20761bc875787e6ff1daeba56e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5c0b97280754bca09a214a1209794eaed88bdb20761bc875787e6ff1daeba56e/hostname",
	        "HostsPath": "/var/lib/docker/containers/5c0b97280754bca09a214a1209794eaed88bdb20761bc875787e6ff1daeba56e/hosts",
	        "LogPath": "/var/lib/docker/containers/5c0b97280754bca09a214a1209794eaed88bdb20761bc875787e6ff1daeba56e/5c0b97280754bca09a214a1209794eaed88bdb20761bc875787e6ff1daeba56e-json.log",
	        "Name": "/no-preload-336331",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-336331:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-336331",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5c0b97280754bca09a214a1209794eaed88bdb20761bc875787e6ff1daeba56e",
	                "LowerDir": "/var/lib/docker/overlay2/594362d957f037a0e8c8f90d32655c29146773c96403d1c3d09c40858d94140a-init/diff:/var/lib/docker/overlay2/49bb44e987885c48600d5ae7ad3c81cad82a00f38070c0460882c5746d9fae59/diff",
	                "MergedDir": "/var/lib/docker/overlay2/594362d957f037a0e8c8f90d32655c29146773c96403d1c3d09c40858d94140a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/594362d957f037a0e8c8f90d32655c29146773c96403d1c3d09c40858d94140a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/594362d957f037a0e8c8f90d32655c29146773c96403d1c3d09c40858d94140a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-336331",
	                "Source": "/var/lib/docker/volumes/no-preload-336331/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-336331",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-336331",
	                "name.minikube.sigs.k8s.io": "no-preload-336331",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "a35b793d77bec36a65a4cfc65ba9e196ed400774428baa52ade867e03d05eca1",
	            "SandboxKey": "/var/run/docker/netns/a35b793d77be",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33478"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33479"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33482"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33480"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33481"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-336331": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "be4fb772701bc21d00b8604cf864a912ac52112a68f7d1c80495359c23362a1c",
	                    "EndpointID": "aa549c13c1ab6bef1c6b461faa40f616a44b116d95d06f4052a17d62f6cc798f",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "96:2a:66:44:60:f5",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-336331",
	                        "5c0b97280754"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-336331 -n no-preload-336331
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-336331 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-336331 logs -n 25: (1.192205949s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────
────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────
────────┤
	│ ssh     │ -p bridge-775392 sudo cat /etc/kubernetes/kubelet.conf                                                                                                                                                                                               │ bridge-775392          │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │ 02 Dec 25 20:54 UTC │
	│ ssh     │ -p bridge-775392 sudo cat /var/lib/kubelet/config.yaml                                                                                                                                                                                               │ bridge-775392          │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │ 02 Dec 25 20:54 UTC │
	│ ssh     │ -p bridge-775392 sudo systemctl status docker --all --full --no-pager                                                                                                                                                                                │ bridge-775392          │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │                     │
	│ ssh     │ -p bridge-775392 sudo systemctl cat docker --no-pager                                                                                                                                                                                                │ bridge-775392          │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │ 02 Dec 25 20:54 UTC │
	│ ssh     │ -p bridge-775392 sudo cat /etc/docker/daemon.json                                                                                                                                                                                                    │ bridge-775392          │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │                     │
	│ ssh     │ -p bridge-775392 sudo docker system info                                                                                                                                                                                                             │ bridge-775392          │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │                     │
	│ ssh     │ -p bridge-775392 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                                            │ bridge-775392          │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │                     │
	│ ssh     │ -p bridge-775392 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                            │ bridge-775392          │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │ 02 Dec 25 20:54 UTC │
	│ ssh     │ -p bridge-775392 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                       │ bridge-775392          │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │                     │
	│ ssh     │ -p bridge-775392 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                                 │ bridge-775392          │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │ 02 Dec 25 20:54 UTC │
	│ ssh     │ -p bridge-775392 sudo cri-dockerd --version                                                                                                                                                                                                          │ bridge-775392          │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │ 02 Dec 25 20:54 UTC │
	│ ssh     │ -p bridge-775392 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                            │ bridge-775392          │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │                     │
	│ ssh     │ -p bridge-775392 sudo systemctl cat containerd --no-pager                                                                                                                                                                                            │ bridge-775392          │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │ 02 Dec 25 20:54 UTC │
	│ ssh     │ -p bridge-775392 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                                     │ bridge-775392          │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │ 02 Dec 25 20:54 UTC │
	│ ssh     │ -p bridge-775392 sudo cat /etc/containerd/config.toml                                                                                                                                                                                                │ bridge-775392          │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │ 02 Dec 25 20:54 UTC │
	│ ssh     │ -p bridge-775392 sudo containerd config dump                                                                                                                                                                                                         │ bridge-775392          │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │ 02 Dec 25 20:54 UTC │
	│ ssh     │ -p bridge-775392 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                                  │ bridge-775392          │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │ 02 Dec 25 20:54 UTC │
	│ ssh     │ -p bridge-775392 sudo systemctl cat crio --no-pager                                                                                                                                                                                                  │ bridge-775392          │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │ 02 Dec 25 20:54 UTC │
	│ ssh     │ -p bridge-775392 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                        │ bridge-775392          │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │ 02 Dec 25 20:54 UTC │
	│ ssh     │ -p bridge-775392 sudo crio config                                                                                                                                                                                                                    │ bridge-775392          │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │ 02 Dec 25 20:54 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-992336 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ old-k8s-version-992336 │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │ 02 Dec 25 20:54 UTC │
	│ delete  │ -p bridge-775392                                                                                                                                                                                                                                     │ bridge-775392          │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │ 02 Dec 25 20:54 UTC │
	│ start   │ -p old-k8s-version-992336 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0        │ old-k8s-version-992336 │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │                     │
	│ start   │ -p newest-cni-245604 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-245604      │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-336331 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ no-preload-336331      │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────
────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 20:54:51
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 20:54:51.248686  744523 out.go:360] Setting OutFile to fd 1 ...
	I1202 20:54:51.248931  744523 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:54:51.248939  744523 out.go:374] Setting ErrFile to fd 2...
	I1202 20:54:51.248944  744523 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:54:51.249199  744523 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-407427/.minikube/bin
	I1202 20:54:51.249701  744523 out.go:368] Setting JSON to false
	I1202 20:54:51.250904  744523 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":9435,"bootTime":1764699456,"procs":364,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 20:54:51.250979  744523 start.go:143] virtualization: kvm guest
	I1202 20:54:51.252790  744523 out.go:179] * [newest-cni-245604] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1202 20:54:51.253899  744523 out.go:179]   - MINIKUBE_LOCATION=21997
	I1202 20:54:51.253977  744523 notify.go:221] Checking for updates...
	I1202 20:54:51.255724  744523 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 20:54:51.257813  744523 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-407427/kubeconfig
	I1202 20:54:51.259113  744523 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-407427/.minikube
	I1202 20:54:51.260359  744523 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1202 20:54:51.261736  744523 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 20:54:51.263851  744523 config.go:182] Loaded profile config "default-k8s-diff-port-997805": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 20:54:51.264036  744523 config.go:182] Loaded profile config "no-preload-336331": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 20:54:51.264195  744523 config.go:182] Loaded profile config "old-k8s-version-992336": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1202 20:54:51.264328  744523 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 20:54:51.291120  744523 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1202 20:54:51.291259  744523 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 20:54:51.351993  744523 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-02 20:54:51.3414757 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 20:54:51.352148  744523 docker.go:319] overlay module found
	I1202 20:54:51.354258  744523 out.go:179] * Using the docker driver based on user configuration
	I1202 20:54:51.355593  744523 start.go:309] selected driver: docker
	I1202 20:54:51.355614  744523 start.go:927] validating driver "docker" against <nil>
	I1202 20:54:51.355627  744523 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 20:54:51.356356  744523 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 20:54:51.426417  744523 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-02 20:54:51.413315172 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 20:54:51.426660  744523 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1202 20:54:51.426715  744523 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1202 20:54:51.427099  744523 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1202 20:54:51.430750  744523 out.go:179] * Using Docker driver with root privileges
	I1202 20:54:51.432181  744523 cni.go:84] Creating CNI manager for ""
	I1202 20:54:51.432273  744523 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 20:54:51.432289  744523 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1202 20:54:51.432396  744523 start.go:353] cluster config:
	{Name:newest-cni-245604 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-245604 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePat
h: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 20:54:51.433991  744523 out.go:179] * Starting "newest-cni-245604" primary control-plane node in "newest-cni-245604" cluster
	I1202 20:54:51.435712  744523 cache.go:134] Beginning downloading kic base image for docker with crio
	I1202 20:54:51.437418  744523 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1202 20:54:51.438923  744523 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1202 20:54:51.439029  744523 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 20:54:51.471094  744523 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1202 20:54:51.471120  744523 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1202 20:54:51.534888  744523 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	W1202 20:54:51.754467  744523 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1202 20:54:51.754662  744523 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/newest-cni-245604/config.json ...
	I1202 20:54:51.754711  744523 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/newest-cni-245604/config.json: {Name:mkdd178ed72e91eb36b68a6cb223fd44f9a5dcff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:54:51.754782  744523 cache.go:107] acquiring lock: {Name:mkf03491d08646dc0a2273e6c20a49756d4e1761 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 20:54:51.754824  744523 cache.go:107] acquiring lock: {Name:mk4453b54b86b3689d0543734fa82feede2f4f33 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 20:54:51.754826  744523 cache.go:107] acquiring lock: {Name:mk8c99492104b5abf1d260aa0432b08c059c9259 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 20:54:51.754883  744523 cache.go:107] acquiring lock: {Name:mk5eb5d2ea906db41607942a8f8093a266b381cd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 20:54:51.754913  744523 cache.go:107] acquiring lock: {Name:mkda13332b8e3f844bd42c29502a9c7671b1ad3b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 20:54:51.754935  744523 cache.go:243] Successfully downloaded all kic artifacts
	I1202 20:54:51.754899  744523 cache.go:107] acquiring lock: {Name:mk01b60fbf34196e8795139c06a53061b5bbef1c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 20:54:51.754947  744523 cache.go:115] /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 exists
	I1202 20:54:51.754967  744523 cache.go:115] /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1202 20:54:51.754900  744523 cache.go:115] /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1202 20:54:51.754974  744523 start.go:360] acquireMachinesLock for newest-cni-245604: {Name:mk8ec8505d24ccef2b962d884ea41e40436fd883 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 20:54:51.754980  744523 cache.go:115] /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1202 20:54:51.754981  744523 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1" took 69.251µs
	I1202 20:54:51.754990  744523 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 242.138µs
	I1202 20:54:51.754996  744523 cache.go:115] /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1202 20:54:51.755004  744523 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1202 20:54:51.755001  744523 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1202 20:54:51.754963  744523 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0" took 141.678µs
	I1202 20:54:51.755018  744523 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1202 20:54:51.754982  744523 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 158.842µs
	I1202 20:54:51.755022  744523 start.go:364] duration metric: took 35.783µs to acquireMachinesLock for "newest-cni-245604"
	I1202 20:54:51.754970  744523 cache.go:115] /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1202 20:54:51.755028  744523 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 147.229µs
	I1202 20:54:51.755038  744523 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1202 20:54:51.755036  744523 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 141.032µs
	I1202 20:54:51.755051  744523 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1202 20:54:51.754782  744523 cache.go:107] acquiring lock: {Name:mk911a7415c1db6121866a16aaa8d547d8fc27e6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 20:54:51.755025  744523 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1202 20:54:51.754791  744523 cache.go:107] acquiring lock: {Name:mk1ce3ec6c8a0a78faf5ccb0bb487dc5a506ffff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 20:54:51.755107  744523 cache.go:115] /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1202 20:54:51.755130  744523 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 351.859µs
	I1202 20:54:51.755151  744523 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1202 20:54:51.755051  744523 start.go:93] Provisioning new machine with config: &{Name:newest-cni-245604 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-245604 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpti
mizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 20:54:51.755192  744523 start.go:125] createHost starting for "" (driver="docker")
	I1202 20:54:51.755295  744523 cache.go:115] /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1202 20:54:51.755311  744523 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 531.706µs
	I1202 20:54:51.755333  744523 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1202 20:54:51.755341  744523 cache.go:87] Successfully saved all images to host disk.
	I1202 20:54:49.807275  736301 out.go:252]   - Booting up control plane ...
	I1202 20:54:49.807399  736301 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1202 20:54:49.807498  736301 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1202 20:54:49.807593  736301 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1202 20:54:49.820733  736301 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1202 20:54:49.820866  736301 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1202 20:54:49.828232  736301 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1202 20:54:49.829367  736301 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1202 20:54:49.829419  736301 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1202 20:54:49.939090  736301 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1202 20:54:49.939273  736301 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1202 20:54:50.939981  736301 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.00098678s
	I1202 20:54:50.943942  736301 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1202 20:54:50.944097  736301 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8444/livez
	I1202 20:54:50.944200  736301 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1202 20:54:50.944356  736301 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
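	(For context on the [control-plane-check] lines above: kubeadm polls each component's health endpoint, the API server's /livez on its advertise address and the controller-manager and scheduler on their localhost ports 10257 and 10259, until it answers 200 OK or the 4m0s budget runs out. Below is a minimal, hypothetical Go sketch of such a polling loop, not kubeadm's actual code; the endpoint URLs are taken from the log, the retry interval is an assumption.)

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // waitHealthy polls url until it returns HTTP 200 or the timeout expires.
    // The components serve TLS with self-signed certs, so verification is
    // skipped here purely for illustration.
    func waitHealthy(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    		}
    		time.Sleep(500 * time.Millisecond) // assumed retry interval
    	}
    	return fmt.Errorf("%s not healthy after %s", url, timeout)
    }

    func main() {
    	endpoints := []string{
    		"https://192.168.85.2:8444/livez", // kube-apiserver (from the log)
    		"https://127.0.0.1:10257/healthz", // kube-controller-manager
    		"https://127.0.0.1:10259/livez",   // kube-scheduler
    	}
    	for _, ep := range endpoints {
    		if err := waitHealthy(ep, 4*time.Minute); err != nil {
    			fmt.Println(err)
    		}
    	}
    }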
	W1202 20:54:48.889199  727677 node_ready.go:57] node "no-preload-336331" has "Ready":"False" status (will retry)
	W1202 20:54:50.889639  727677 node_ready.go:57] node "no-preload-336331" has "Ready":"False" status (will retry)
	W1202 20:54:52.890301  727677 node_ready.go:57] node "no-preload-336331" has "Ready":"False" status (will retry)
	I1202 20:54:48.917338  743547 out.go:252] * Restarting existing docker container for "old-k8s-version-992336" ...
	I1202 20:54:48.917418  743547 cli_runner.go:164] Run: docker start old-k8s-version-992336
	I1202 20:54:49.233874  743547 cli_runner.go:164] Run: docker container inspect old-k8s-version-992336 --format={{.State.Status}}
	I1202 20:54:49.254208  743547 kic.go:430] container "old-k8s-version-992336" state is running.
	I1202 20:54:49.254576  743547 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-992336
	I1202 20:54:49.276197  743547 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/old-k8s-version-992336/config.json ...
	I1202 20:54:49.276474  743547 machine.go:94] provisionDockerMachine start ...
	I1202 20:54:49.276556  743547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-992336
	I1202 20:54:49.295873  743547 main.go:143] libmachine: Using SSH client type: native
	I1202 20:54:49.296238  743547 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33488 <nil> <nil>}
	I1202 20:54:49.296255  743547 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 20:54:49.296917  743547 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:36762->127.0.0.1:33488: read: connection reset by peer
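	(The dial error above is expected: the container was restarted only a moment earlier, so the first SSH handshake hits a port whose sshd is not yet listening; the attempt is retried and succeeds about three seconds later, as the next line shows. A hypothetical sketch of such a retry loop in Go; port 33488 is the mapped host port from the log, the backoff values are assumptions.)

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    // dialWithRetry keeps trying a TCP connection until it succeeds or the
    // deadline passes; a freshly (re)started container needs a few seconds
    // before sshd accepts connections.
    func dialWithRetry(addr string, timeout time.Duration) (net.Conn, error) {
    	deadline := time.Now().Add(timeout)
    	for {
    		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
    		if err == nil {
    			return conn, nil
    		}
    		if time.Now().After(deadline) {
    			return nil, fmt.Errorf("dial %s: %w", addr, err)
    		}
    		time.Sleep(time.Second) // assumed backoff
    	}
    }

    func main() {
    	conn, err := dialWithRetry("127.0.0.1:33488", 30*time.Second)
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	defer conn.Close()
    	fmt.Println("ssh port reachable")
    }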
	I1202 20:54:52.482289  743547 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-992336
	
	I1202 20:54:52.482326  743547 ubuntu.go:182] provisioning hostname "old-k8s-version-992336"
	I1202 20:54:52.482403  743547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-992336
	I1202 20:54:52.508620  743547 main.go:143] libmachine: Using SSH client type: native
	I1202 20:54:52.509026  743547 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33488 <nil> <nil>}
	I1202 20:54:52.509045  743547 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-992336 && echo "old-k8s-version-992336" | sudo tee /etc/hostname
	I1202 20:54:52.680116  743547 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-992336
	
	I1202 20:54:52.680210  743547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-992336
	I1202 20:54:52.706295  743547 main.go:143] libmachine: Using SSH client type: native
	I1202 20:54:52.706638  743547 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33488 <nil> <nil>}
	I1202 20:54:52.706666  743547 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-992336' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-992336/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-992336' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 20:54:52.868164  743547 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1202 20:54:52.868203  743547 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-407427/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-407427/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-407427/.minikube}
	I1202 20:54:52.868253  743547 ubuntu.go:190] setting up certificates
	I1202 20:54:52.868266  743547 provision.go:84] configureAuth start
	I1202 20:54:52.868351  743547 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-992336
	I1202 20:54:52.896120  743547 provision.go:143] copyHostCerts
	I1202 20:54:52.896189  743547 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-407427/.minikube/ca.pem, removing ...
	I1202 20:54:52.896201  743547 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-407427/.minikube/ca.pem
	I1202 20:54:52.896288  743547 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-407427/.minikube/ca.pem (1082 bytes)
	I1202 20:54:52.896403  743547 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-407427/.minikube/cert.pem, removing ...
	I1202 20:54:52.896415  743547 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-407427/.minikube/cert.pem
	I1202 20:54:52.896450  743547 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-407427/.minikube/cert.pem (1123 bytes)
	I1202 20:54:52.896523  743547 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-407427/.minikube/key.pem, removing ...
	I1202 20:54:52.896534  743547 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-407427/.minikube/key.pem
	I1202 20:54:52.896565  743547 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-407427/.minikube/key.pem (1675 bytes)
	I1202 20:54:52.896627  743547 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-407427/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-992336 san=[127.0.0.1 192.168.94.2 localhost minikube old-k8s-version-992336]
	I1202 20:54:53.042224  743547 provision.go:177] copyRemoteCerts
	I1202 20:54:53.042352  743547 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 20:54:53.042421  743547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-992336
	I1202 20:54:53.066302  743547 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/old-k8s-version-992336/id_rsa Username:docker}
	I1202 20:54:53.180785  743547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 20:54:53.215027  743547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1202 20:54:53.249137  743547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1202 20:54:53.276327  743547 provision.go:87] duration metric: took 408.04457ms to configureAuth
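	(configureAuth above generates a server certificate whose SANs cover every name the machine may be reached by, 127.0.0.1, the container IP 192.168.94.2, localhost, minikube and the profile name, and then copies it to /etc/docker on the guest. Below is a rough Go sketch of producing a cert with those SANs; it is self-signed for brevity, whereas minikube signs with its own CA under .minikube/certs, and the key type and validity period are assumptions.)

    package main

    import (
    	"crypto/ecdsa"
    	"crypto/elliptic"
    	"crypto/rand"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-992336"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour), // assumed validity
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SANs taken from the provision.go line above.
    		DNSNames:    []string{"localhost", "minikube", "old-k8s-version-992336"},
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.94.2")},
    	}
    	// Self-signed here for brevity; minikube signs with its ca.pem/ca-key.pem.
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }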
	I1202 20:54:53.276364  743547 ubuntu.go:206] setting minikube options for container-runtime
	I1202 20:54:53.276661  743547 config.go:182] Loaded profile config "old-k8s-version-992336": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1202 20:54:53.276881  743547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-992336
	I1202 20:54:53.305450  743547 main.go:143] libmachine: Using SSH client type: native
	I1202 20:54:53.305788  743547 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33488 <nil> <nil>}
	I1202 20:54:53.305819  743547 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 20:54:53.745248  743547 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 20:54:53.745280  743547 machine.go:97] duration metric: took 4.468788993s to provisionDockerMachine
	I1202 20:54:53.745299  743547 start.go:293] postStartSetup for "old-k8s-version-992336" (driver="docker")
	I1202 20:54:53.745313  743547 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 20:54:53.745402  743547 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 20:54:53.745451  743547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-992336
	I1202 20:54:53.773838  743547 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/old-k8s-version-992336/id_rsa Username:docker}
	I1202 20:54:53.877082  743547 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 20:54:53.881285  743547 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 20:54:53.881316  743547 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 20:54:53.881332  743547 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-407427/.minikube/addons for local assets ...
	I1202 20:54:53.881412  743547 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-407427/.minikube/files for local assets ...
	I1202 20:54:53.881515  743547 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-407427/.minikube/files/etc/ssl/certs/4110322.pem -> 4110322.pem in /etc/ssl/certs
	I1202 20:54:53.881673  743547 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 20:54:53.890517  743547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/files/etc/ssl/certs/4110322.pem --> /etc/ssl/certs/4110322.pem (1708 bytes)
	I1202 20:54:53.911353  743547 start.go:296] duration metric: took 166.0361ms for postStartSetup
	I1202 20:54:53.911460  743547 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 20:54:53.911513  743547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-992336
	I1202 20:54:53.934180  743547 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/old-k8s-version-992336/id_rsa Username:docker}
	I1202 20:54:54.034877  743547 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 20:54:54.040410  743547 fix.go:56] duration metric: took 5.146736871s for fixHost
	I1202 20:54:54.040443  743547 start.go:83] releasing machines lock for "old-k8s-version-992336", held for 5.146795457s
	I1202 20:54:54.040529  743547 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-992336
	I1202 20:54:54.060426  743547 ssh_runner.go:195] Run: cat /version.json
	I1202 20:54:54.060485  743547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-992336
	I1202 20:54:54.060496  743547 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 20:54:54.060573  743547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-992336
	I1202 20:54:54.082901  743547 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/old-k8s-version-992336/id_rsa Username:docker}
	I1202 20:54:54.082948  743547 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/old-k8s-version-992336/id_rsa Username:docker}
	I1202 20:54:54.182659  743547 ssh_runner.go:195] Run: systemctl --version
	I1202 20:54:54.241255  743547 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 20:54:54.279690  743547 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 20:54:54.284969  743547 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 20:54:54.285109  743547 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 20:54:54.294313  743547 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1202 20:54:54.294343  743547 start.go:496] detecting cgroup driver to use...
	I1202 20:54:54.294378  743547 detect.go:190] detected "systemd" cgroup driver on host os
	I1202 20:54:54.294431  743547 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 20:54:54.311476  743547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 20:54:54.325741  743547 docker.go:218] disabling cri-docker service (if available) ...
	I1202 20:54:54.325809  743547 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 20:54:54.342382  743547 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 20:54:54.356905  743547 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 20:54:54.449514  743547 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 20:54:54.540100  743547 docker.go:234] disabling docker service ...
	I1202 20:54:54.540175  743547 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 20:54:54.557954  743547 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 20:54:54.575642  743547 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 20:54:54.677171  743547 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 20:54:54.787938  743547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 20:54:54.805380  743547 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 20:54:54.824665  743547 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1202 20:54:54.824729  743547 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:54:54.837044  743547 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1202 20:54:54.837142  743547 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:54:54.849210  743547 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:54:54.860907  743547 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:54:54.871629  743547 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 20:54:54.882082  743547 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:54:54.893928  743547 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:54:54.905219  743547 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:54:54.917032  743547 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 20:54:54.927659  743547 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 20:54:54.938429  743547 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 20:54:55.059022  743547 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1202 20:54:55.238974  743547 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 20:54:55.239099  743547 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 20:54:55.245135  743547 start.go:564] Will wait 60s for crictl version
	I1202 20:54:55.245210  743547 ssh_runner.go:195] Run: which crictl
	I1202 20:54:55.250232  743547 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 20:54:55.282324  743547 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1202 20:54:55.282412  743547 ssh_runner.go:195] Run: crio --version
	I1202 20:54:55.320935  743547 ssh_runner.go:195] Run: crio --version
	I1202 20:54:55.361998  743547 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.2 ...
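	(The run of sed commands above rewrites the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf in place: it pins the pause image, forces cgroup_manager to "systemd", re-adds conmon_cgroup = "pod", and injects net.ipv4.ip_unprivileged_port_start=0 into default_sysctls before restarting crio. Below is a hypothetical Go sketch applying the same substitutions to a config string in memory; the sample input content is assumed, not the real file.)

    package main

    import (
    	"fmt"
    	"regexp"
    )

    func main() {
    	conf := `[crio.image]
    pause_image = "registry.k8s.io/pause:3.10"
    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "system.slice"
    `
    	// Mirror the sed edits from the log: pin the pause image and force systemd cgroups.
    	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
    	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAllString(conf, `cgroup_manager = "systemd"`)
    	// Drop any existing conmon_cgroup line, then re-add it (plus default_sysctls)
    	// right after cgroup_manager, as the sed sequence does.
    	conf = regexp.MustCompile(`(?m)^\s*conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
    	conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
    		ReplaceAllString(conf, "$1\nconmon_cgroup = \"pod\"\ndefault_sysctls = [\n  \"net.ipv4.ip_unprivileged_port_start=0\",\n]")
    	fmt.Print(conf)
    }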
	I1202 20:54:52.527997  736301 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.583967669s
	I1202 20:54:53.622779  736301 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.678806737s
	I1202 20:54:55.446643  736301 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.502546315s
	I1202 20:54:55.467578  736301 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1202 20:54:55.486539  736301 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1202 20:54:55.505049  736301 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1202 20:54:55.505398  736301 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-997805 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1202 20:54:55.516932  736301 kubeadm.go:319] [bootstrap-token] Using token: clatot.hc48jyk0hvxonz06
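	(The bootstrap token printed above, clatot.hc48jyk0hvxonz06, follows kubeadm's fixed format: a 6-character token ID, a dot, and a 16-character secret, both lowercase alphanumeric. A small Go check of that format as an illustration, not kubeadm's own validation code:)

    package main

    import (
    	"fmt"
    	"regexp"
    )

    // Bootstrap tokens have the form [a-z0-9]{6}.[a-z0-9]{16}; the ID part is
    // public (it names the Secret in kube-system), the secret part is not.
    var tokenRe = regexp.MustCompile(`^[a-z0-9]{6}\.[a-z0-9]{16}$`)

    func main() {
    	fmt.Println(tokenRe.MatchString("clatot.hc48jyk0hvxonz06")) // true
    }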
	I1202 20:54:51.758445  744523 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1202 20:54:51.758787  744523 start.go:159] libmachine.API.Create for "newest-cni-245604" (driver="docker")
	I1202 20:54:51.758834  744523 client.go:173] LocalClient.Create starting
	I1202 20:54:51.758936  744523 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca.pem
	I1202 20:54:51.759008  744523 main.go:143] libmachine: Decoding PEM data...
	I1202 20:54:51.759032  744523 main.go:143] libmachine: Parsing certificate...
	I1202 20:54:51.759118  744523 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21997-407427/.minikube/certs/cert.pem
	I1202 20:54:51.759148  744523 main.go:143] libmachine: Decoding PEM data...
	I1202 20:54:51.759171  744523 main.go:143] libmachine: Parsing certificate...
	I1202 20:54:51.759637  744523 cli_runner.go:164] Run: docker network inspect newest-cni-245604 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1202 20:54:51.781898  744523 cli_runner.go:211] docker network inspect newest-cni-245604 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1202 20:54:51.781982  744523 network_create.go:284] running [docker network inspect newest-cni-245604] to gather additional debugging logs...
	I1202 20:54:51.782006  744523 cli_runner.go:164] Run: docker network inspect newest-cni-245604
	W1202 20:54:51.801637  744523 cli_runner.go:211] docker network inspect newest-cni-245604 returned with exit code 1
	I1202 20:54:51.801678  744523 network_create.go:287] error running [docker network inspect newest-cni-245604]: docker network inspect newest-cni-245604: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-245604 not found
	I1202 20:54:51.801697  744523 network_create.go:289] output of [docker network inspect newest-cni-245604]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-245604 not found
	
	** /stderr **
	I1202 20:54:51.801890  744523 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 20:54:51.824870  744523 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-acf081edf266 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:42:04:c0:60:47:62} reservation:<nil>}
	I1202 20:54:51.825911  744523 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-9623a21fb225 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:6a:fc:8b:40:15:1b} reservation:<nil>}
	I1202 20:54:51.826609  744523 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-f2b79e7e26a5 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:0e:c7:f4:38:1c:32} reservation:<nil>}
	I1202 20:54:51.827584  744523 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-be4fb772701b IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:22:87:5f:38:96:b7} reservation:<nil>}
	I1202 20:54:51.828542  744523 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-13fe483902b9 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:a2:a4:21:b2:62:5a} reservation:<nil>}
	I1202 20:54:51.829195  744523 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-65ab470fa0e2 IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:16:23:28:7c:c5:24} reservation:<nil>}
	I1202 20:54:51.830231  744523 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ed3d00}
	I1202 20:54:51.830266  744523 network_create.go:124] attempt to create docker network newest-cni-245604 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1202 20:54:51.830316  744523 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-245604 newest-cni-245604
	I1202 20:54:51.887973  744523 network_create.go:108] docker network newest-cni-245604 192.168.103.0/24 created
	I1202 20:54:51.888023  744523 kic.go:121] calculated static IP "192.168.103.2" for the "newest-cni-245604" container
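	(The network.go lines above show how a free address space is chosen for the new cluster network: candidate 192.168.x.0/24 subnets are walked in order (49, 58, 67, 76, ... in steps of 9 here), any subnet already occupied by an existing bridge is skipped, and the first free one, 192.168.103.0/24, is used, with .1 as the gateway and .2 as the node's static IP. A simplified Go sketch of that scan; the step size and starting octet are inferred from the log, and the real logic also reserves and locks the subnet.)

    package main

    import "fmt"

    // firstFreeSubnet walks 192.168.<start>.0/24, 192.168.<start+step>.0/24, ...
    // and returns the first subnet not present in taken.
    func firstFreeSubnet(start, step int, taken map[string]bool) string {
    	for octet := start; octet < 256; octet += step {
    		subnet := fmt.Sprintf("192.168.%d.0/24", octet)
    		if !taken[subnet] {
    			return subnet
    		}
    	}
    	return ""
    }

    func main() {
    	// Subnets already used by other profiles' docker networks (from the log).
    	taken := map[string]bool{
    		"192.168.49.0/24": true, "192.168.58.0/24": true,
    		"192.168.67.0/24": true, "192.168.76.0/24": true,
    		"192.168.85.0/24": true, "192.168.94.0/24": true,
    	}
    	fmt.Println(firstFreeSubnet(49, 9, taken)) // 192.168.103.0/24
    }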
	I1202 20:54:51.888128  744523 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1202 20:54:51.909991  744523 cli_runner.go:164] Run: docker volume create newest-cni-245604 --label name.minikube.sigs.k8s.io=newest-cni-245604 --label created_by.minikube.sigs.k8s.io=true
	I1202 20:54:51.933849  744523 oci.go:103] Successfully created a docker volume newest-cni-245604
	I1202 20:54:51.933969  744523 cli_runner.go:164] Run: docker run --rm --name newest-cni-245604-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-245604 --entrypoint /usr/bin/test -v newest-cni-245604:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib
	I1202 20:54:52.386347  744523 oci.go:107] Successfully prepared a docker volume newest-cni-245604
	I1202 20:54:52.386442  744523 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	W1202 20:54:52.386653  744523 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1202 20:54:52.386714  744523 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1202 20:54:52.386763  744523 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1202 20:54:52.468472  744523 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-245604 --name newest-cni-245604 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-245604 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-245604 --network newest-cni-245604 --ip 192.168.103.2 --volume newest-cni-245604:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b
	I1202 20:54:52.834787  744523 cli_runner.go:164] Run: docker container inspect newest-cni-245604 --format={{.State.Running}}
	I1202 20:54:52.859568  744523 cli_runner.go:164] Run: docker container inspect newest-cni-245604 --format={{.State.Status}}
	I1202 20:54:52.888318  744523 cli_runner.go:164] Run: docker exec newest-cni-245604 stat /var/lib/dpkg/alternatives/iptables
	I1202 20:54:52.947034  744523 oci.go:144] the created container "newest-cni-245604" has a running status.
	I1202 20:54:52.947106  744523 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21997-407427/.minikube/machines/newest-cni-245604/id_rsa...
	I1202 20:54:53.161566  744523 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21997-407427/.minikube/machines/newest-cni-245604/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1202 20:54:53.197985  744523 cli_runner.go:164] Run: docker container inspect newest-cni-245604 --format={{.State.Status}}
	I1202 20:54:53.229219  744523 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1202 20:54:53.229249  744523 kic_runner.go:114] Args: [docker exec --privileged newest-cni-245604 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1202 20:54:53.293954  744523 cli_runner.go:164] Run: docker container inspect newest-cni-245604 --format={{.State.Status}}
	I1202 20:54:53.319791  744523 machine.go:94] provisionDockerMachine start ...
	I1202 20:54:53.319987  744523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-245604
	I1202 20:54:53.347829  744523 main.go:143] libmachine: Using SSH client type: native
	I1202 20:54:53.348214  744523 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33493 <nil> <nil>}
	I1202 20:54:53.348237  744523 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 20:54:53.514601  744523 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-245604
	
	I1202 20:54:53.514632  744523 ubuntu.go:182] provisioning hostname "newest-cni-245604"
	I1202 20:54:53.514706  744523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-245604
	I1202 20:54:53.543984  744523 main.go:143] libmachine: Using SSH client type: native
	I1202 20:54:53.544329  744523 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33493 <nil> <nil>}
	I1202 20:54:53.544354  744523 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-245604 && echo "newest-cni-245604" | sudo tee /etc/hostname
	I1202 20:54:53.729217  744523 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-245604
	
	I1202 20:54:53.729302  744523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-245604
	I1202 20:54:53.755581  744523 main.go:143] libmachine: Using SSH client type: native
	I1202 20:54:53.755911  744523 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33493 <nil> <nil>}
	I1202 20:54:53.755944  744523 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-245604' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-245604/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-245604' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 20:54:53.904745  744523 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1202 20:54:53.904773  744523 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-407427/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-407427/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-407427/.minikube}
	I1202 20:54:53.904818  744523 ubuntu.go:190] setting up certificates
	I1202 20:54:53.904831  744523 provision.go:84] configureAuth start
	I1202 20:54:53.904887  744523 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-245604
	I1202 20:54:53.926340  744523 provision.go:143] copyHostCerts
	I1202 20:54:53.926412  744523 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-407427/.minikube/ca.pem, removing ...
	I1202 20:54:53.926426  744523 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-407427/.minikube/ca.pem
	I1202 20:54:53.926508  744523 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-407427/.minikube/ca.pem (1082 bytes)
	I1202 20:54:53.926637  744523 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-407427/.minikube/cert.pem, removing ...
	I1202 20:54:53.926646  744523 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-407427/.minikube/cert.pem
	I1202 20:54:53.926677  744523 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-407427/.minikube/cert.pem (1123 bytes)
	I1202 20:54:53.926741  744523 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-407427/.minikube/key.pem, removing ...
	I1202 20:54:53.926749  744523 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-407427/.minikube/key.pem
	I1202 20:54:53.926776  744523 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-407427/.minikube/key.pem (1675 bytes)
	I1202 20:54:53.926832  744523 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-407427/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca-key.pem org=jenkins.newest-cni-245604 san=[127.0.0.1 192.168.103.2 localhost minikube newest-cni-245604]
	I1202 20:54:54.033669  744523 provision.go:177] copyRemoteCerts
	I1202 20:54:54.033748  744523 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 20:54:54.033805  744523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-245604
	I1202 20:54:54.055356  744523 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/newest-cni-245604/id_rsa Username:docker}
	I1202 20:54:54.161586  744523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 20:54:54.183507  744523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1202 20:54:54.203578  744523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1202 20:54:54.223521  744523 provision.go:87] duration metric: took 318.655712ms to configureAuth
	I1202 20:54:54.223562  744523 ubuntu.go:206] setting minikube options for container-runtime
	I1202 20:54:54.223787  744523 config.go:182] Loaded profile config "newest-cni-245604": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 20:54:54.223932  744523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-245604
	I1202 20:54:54.243976  744523 main.go:143] libmachine: Using SSH client type: native
	I1202 20:54:54.244266  744523 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33493 <nil> <nil>}
	I1202 20:54:54.244285  744523 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 20:54:54.563270  744523 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 20:54:54.563301  744523 machine.go:97] duration metric: took 1.243461731s to provisionDockerMachine
	I1202 20:54:54.563315  744523 client.go:176] duration metric: took 2.804467588s to LocalClient.Create
	I1202 20:54:54.563333  744523 start.go:167] duration metric: took 2.804549056s to libmachine.API.Create "newest-cni-245604"
	I1202 20:54:54.563343  744523 start.go:293] postStartSetup for "newest-cni-245604" (driver="docker")
	I1202 20:54:54.563359  744523 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 20:54:54.563434  744523 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 20:54:54.563487  744523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-245604
	I1202 20:54:54.587633  744523 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/newest-cni-245604/id_rsa Username:docker}
	I1202 20:54:54.704139  744523 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 20:54:54.711871  744523 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 20:54:54.711907  744523 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 20:54:54.711923  744523 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-407427/.minikube/addons for local assets ...
	I1202 20:54:54.711998  744523 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-407427/.minikube/files for local assets ...
	I1202 20:54:54.712158  744523 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-407427/.minikube/files/etc/ssl/certs/4110322.pem -> 4110322.pem in /etc/ssl/certs
	I1202 20:54:54.712308  744523 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 20:54:54.727333  744523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/files/etc/ssl/certs/4110322.pem --> /etc/ssl/certs/4110322.pem (1708 bytes)
	I1202 20:54:54.756096  744523 start.go:296] duration metric: took 192.737221ms for postStartSetup
	I1202 20:54:54.756539  744523 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-245604
	I1202 20:54:54.779332  744523 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/newest-cni-245604/config.json ...
	I1202 20:54:54.779682  744523 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 20:54:54.779734  744523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-245604
	I1202 20:54:54.804251  744523 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/newest-cni-245604/id_rsa Username:docker}
	I1202 20:54:54.909217  744523 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 20:54:54.915212  744523 start.go:128] duration metric: took 3.160001099s to createHost
	I1202 20:54:54.915249  744523 start.go:83] releasing machines lock for "newest-cni-245604", held for 3.160217279s
	I1202 20:54:54.915329  744523 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-245604
	I1202 20:54:54.939674  744523 ssh_runner.go:195] Run: cat /version.json
	I1202 20:54:54.939748  744523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-245604
	I1202 20:54:54.939782  744523 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 20:54:54.939880  744523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-245604
	I1202 20:54:54.964142  744523 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/newest-cni-245604/id_rsa Username:docker}
	I1202 20:54:54.965218  744523 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/newest-cni-245604/id_rsa Username:docker}
	I1202 20:54:55.150195  744523 ssh_runner.go:195] Run: systemctl --version
	I1202 20:54:55.159061  744523 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 20:54:55.203041  744523 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 20:54:55.209011  744523 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 20:54:55.209128  744523 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 20:54:55.242651  744523 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
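	(The find/mv invocation above renames any pre-existing bridge or podman CNI configs in /etc/cni/net.d to *.mk_disabled so they cannot conflict with the CNI about to be deployed; for newest-cni it disabled 10-crio-bridge.conflist.disabled and 87-podman-bridge.conflist, while the old-k8s-version node had nothing to disable. A rough Go equivalent of that rename pass, with paths taken from the log and error handling kept minimal:)

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    	"strings"
    )

    // disableBridgeCNI renames bridge/podman CNI configs so CRI-O ignores them,
    // mirroring the `find ... -exec mv {} {}.mk_disabled` run over SSH.
    func disableBridgeCNI(dir string) ([]string, error) {
    	var disabled []string
    	for _, pattern := range []string{"*bridge*", "*podman*"} {
    		matches, err := filepath.Glob(filepath.Join(dir, pattern))
    		if err != nil {
    			return nil, err
    		}
    		for _, m := range matches {
    			if strings.HasSuffix(m, ".mk_disabled") {
    				continue // already disabled
    			}
    			if err := os.Rename(m, m+".mk_disabled"); err != nil {
    				return nil, err
    			}
    			disabled = append(disabled, m)
    		}
    	}
    	return disabled, nil
    }

    func main() {
    	disabled, err := disableBridgeCNI("/etc/cni/net.d")
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Println("disabled:", disabled)
    }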
	I1202 20:54:55.242680  744523 start.go:496] detecting cgroup driver to use...
	I1202 20:54:55.242718  744523 detect.go:190] detected "systemd" cgroup driver on host os
	I1202 20:54:55.242772  744523 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 20:54:55.265988  744523 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 20:54:55.283822  744523 docker.go:218] disabling cri-docker service (if available) ...
	I1202 20:54:55.283891  744523 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 20:54:55.306452  744523 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 20:54:55.330861  744523 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 20:54:55.437811  744523 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 20:54:55.558513  744523 docker.go:234] disabling docker service ...
	I1202 20:54:55.558591  744523 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 20:54:55.580602  744523 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 20:54:55.596697  744523 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 20:54:55.714954  744523 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 20:54:55.820710  744523 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 20:54:55.834948  744523 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 20:54:55.852971  744523 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1202 20:54:55.853038  744523 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:54:55.866995  744523 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1202 20:54:55.867101  744523 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:54:55.884788  744523 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:54:55.901200  744523 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:54:55.918342  744523 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 20:54:55.928191  744523 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:54:55.938885  744523 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:54:55.955266  744523 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:54:55.965380  744523 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 20:54:55.974592  744523 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 20:54:55.983203  744523 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 20:54:56.089565  744523 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1202 20:54:56.246748  744523 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 20:54:56.246822  744523 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 20:54:56.251650  744523 start.go:564] Will wait 60s for crictl version
	I1202 20:54:56.251725  744523 ssh_runner.go:195] Run: which crictl
	I1202 20:54:56.259643  744523 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 20:54:56.294960  744523 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1202 20:54:56.295118  744523 ssh_runner.go:195] Run: crio --version
	I1202 20:54:56.335315  744523 ssh_runner.go:195] Run: crio --version
	I1202 20:54:56.375510  744523 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.2 ...
	I1202 20:54:56.376891  744523 cli_runner.go:164] Run: docker network inspect newest-cni-245604 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 20:54:56.404101  744523 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1202 20:54:56.410059  744523 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 20:54:56.428224  744523 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1202 20:54:55.363273  743547 cli_runner.go:164] Run: docker network inspect old-k8s-version-992336 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 20:54:55.391463  743547 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1202 20:54:55.395875  743547 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 20:54:55.407541  743547 kubeadm.go:884] updating cluster {Name:old-k8s-version-992336 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-992336 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 20:54:55.407687  743547 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1202 20:54:55.407752  743547 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 20:54:55.448888  743547 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 20:54:55.448914  743547 crio.go:433] Images already preloaded, skipping extraction
	I1202 20:54:55.448981  743547 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 20:54:55.488955  743547 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 20:54:55.488987  743547 cache_images.go:86] Images are preloaded, skipping loading
	I1202 20:54:55.488997  743547 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.28.0 crio true true} ...
	I1202 20:54:55.489187  743547 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-992336 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-992336 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1202 20:54:55.489281  743547 ssh_runner.go:195] Run: crio config
	I1202 20:54:55.555002  743547 cni.go:84] Creating CNI manager for ""
	I1202 20:54:55.555029  743547 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 20:54:55.555046  743547 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1202 20:54:55.555089  743547 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-992336 NodeName:old-k8s-version-992336 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 20:54:55.555302  743547 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-992336"
	  kubeletExtraArgs:
	    node-ip: 192.168.94.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1202 20:54:55.555391  743547 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1202 20:54:55.564702  743547 binaries.go:51] Found k8s binaries, skipping transfer
	I1202 20:54:55.564796  743547 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 20:54:55.574017  743547 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1202 20:54:55.590044  743547 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1202 20:54:55.607238  743547 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
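The config dumped above is one multi-document YAML (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration separated by "---") that gets written to /var/tmp/minikube/kubeadm.yaml.new on the node and is later handed to kubeadm as a whole via its --config flag. A rough illustration of how such a file decomposes back into its four documents (this is not minikube's code, which assembles the file from templates in the opposite direction):

package main

import (
	"fmt"
	"os"
	"strings"
)

// Split the multi-document kubeadm YAML written above into its component
// documents and print the "kind:" line of each, so the four configs are visible.
func main() {
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new") // path taken from the log above
	if err != nil {
		panic(err)
	}
	for i, doc := range strings.Split(string(data), "\n---\n") {
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(strings.TrimSpace(line), "kind:") {
				fmt.Printf("document %d: %s\n", i, strings.TrimSpace(line))
			}
		}
	}
}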
	I1202 20:54:55.624302  743547 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1202 20:54:55.629565  743547 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
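The bash one-liner above strips any existing line for control-plane.minikube.internal out of /etc/hosts and appends the current control-plane IP, so the entry is refreshed idempotently on every start. A minimal Go sketch of the same "filter then append" idea (a hypothetical helper, not minikube's implementation):

package main

import (
	"fmt"
	"os"
	"strings"
)

// refreshHostsEntry drops any existing mapping for host and appends "ip\thost",
// mirroring the grep -v / echo pipeline in the log line above.
func refreshHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // drop the stale entry
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := refreshHostsEntry("/etc/hosts", "192.168.94.2", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}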
	I1202 20:54:55.647331  743547 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 20:54:55.746705  743547 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 20:54:55.778223  743547 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/old-k8s-version-992336 for IP: 192.168.94.2
	I1202 20:54:55.778263  743547 certs.go:195] generating shared ca certs ...
	I1202 20:54:55.778286  743547 certs.go:227] acquiring lock for ca certs: {Name:mkea11924193dd4dc459e16d70207bde383196df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:54:55.778470  743547 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-407427/.minikube/ca.key
	I1202 20:54:55.778540  743547 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-407427/.minikube/proxy-client-ca.key
	I1202 20:54:55.778555  743547 certs.go:257] generating profile certs ...
	I1202 20:54:55.778691  743547 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/old-k8s-version-992336/client.key
	I1202 20:54:55.778774  743547 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/old-k8s-version-992336/apiserver.key.26e20487
	I1202 20:54:55.778826  743547 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/old-k8s-version-992336/proxy-client.key
	I1202 20:54:55.778974  743547 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/411032.pem (1338 bytes)
	W1202 20:54:55.779023  743547 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-407427/.minikube/certs/411032_empty.pem, impossibly tiny 0 bytes
	I1202 20:54:55.779039  743547 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca-key.pem (1679 bytes)
	I1202 20:54:55.779165  743547 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca.pem (1082 bytes)
	I1202 20:54:55.779217  743547 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/cert.pem (1123 bytes)
	I1202 20:54:55.779265  743547 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/key.pem (1675 bytes)
	I1202 20:54:55.779335  743547 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/files/etc/ssl/certs/4110322.pem (1708 bytes)
	I1202 20:54:55.780235  743547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 20:54:55.803356  743547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1202 20:54:55.826463  743547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 20:54:55.847561  743547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1202 20:54:55.875979  743547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/old-k8s-version-992336/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1202 20:54:55.904532  743547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/old-k8s-version-992336/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1202 20:54:55.931492  743547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/old-k8s-version-992336/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 20:54:55.951900  743547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/old-k8s-version-992336/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1202 20:54:55.972640  743547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 20:54:55.992667  743547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/certs/411032.pem --> /usr/share/ca-certificates/411032.pem (1338 bytes)
	I1202 20:54:56.015555  743547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/files/etc/ssl/certs/4110322.pem --> /usr/share/ca-certificates/4110322.pem (1708 bytes)
	I1202 20:54:56.042035  743547 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 20:54:56.059891  743547 ssh_runner.go:195] Run: openssl version
	I1202 20:54:56.068335  743547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 20:54:56.079667  743547 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 20:54:56.085893  743547 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 19:55 /usr/share/ca-certificates/minikubeCA.pem
	I1202 20:54:56.085977  743547 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 20:54:56.143330  743547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 20:54:56.156665  743547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/411032.pem && ln -fs /usr/share/ca-certificates/411032.pem /etc/ssl/certs/411032.pem"
	I1202 20:54:56.169457  743547 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/411032.pem
	I1202 20:54:56.174154  743547 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 20:13 /usr/share/ca-certificates/411032.pem
	I1202 20:54:56.174225  743547 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/411032.pem
	I1202 20:54:56.213730  743547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/411032.pem /etc/ssl/certs/51391683.0"
	I1202 20:54:56.223332  743547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4110322.pem && ln -fs /usr/share/ca-certificates/4110322.pem /etc/ssl/certs/4110322.pem"
	I1202 20:54:56.233176  743547 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4110322.pem
	I1202 20:54:56.237408  743547 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 20:13 /usr/share/ca-certificates/4110322.pem
	I1202 20:54:56.237477  743547 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4110322.pem
	I1202 20:54:56.290593  743547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4110322.pem /etc/ssl/certs/3ec20f2e.0"
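The `test -L ... || ln -fs ...` commands above install each CA under /etc/ssl/certs using its OpenSSL subject-hash name (for example b5213941.0), which is how OpenSSL-based clients on the node look up trusted CAs. A simplified sketch of those two steps driven from Go, reusing the openssl invocation shown in the log (the real flow links through /etc/ssl/certs/<name>.pem first; this collapses it to one link for brevity):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// installCA computes the OpenSSL subject hash of pemPath (same openssl command
// as the log above) and symlinks /etc/ssl/certs/<hash>.0 to it if not present.
func installCA(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
	if _, err := os.Lstat(link); err == nil {
		return nil // hash symlink already present
	}
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}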
	I1202 20:54:56.304474  743547 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 20:54:56.310604  743547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1202 20:54:56.360515  743547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1202 20:54:56.413594  743547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1202 20:54:56.475091  743547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1202 20:54:56.542472  743547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1202 20:54:56.584464  743547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
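Each `openssl x509 -checkend 86400` run above asks whether the certificate will still be valid 24 hours from now; a cert that fails the check would be regenerated before the control plane is restarted. The equivalent test in Go, as a sketch using only crypto/x509 (not minikube's own code path):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// validFor reports whether the PEM certificate at path is still valid after
// the given duration, i.e. the same check as `openssl x509 -checkend`.
func validFor(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(ok, err)
}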
	I1202 20:54:56.628756  743547 kubeadm.go:401] StartCluster: {Name:old-k8s-version-992336 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-992336 Namespace:default APIServerHAVIP: APIServerName:minikube
CA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:
docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 20:54:56.628871  743547 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 20:54:56.628955  743547 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 20:54:56.671457  743547 cri.go:89] found id: "b1921b3926c4fba551a94a0ec78b54be832b8754401c93ba491ed82e1b71e6be"
	I1202 20:54:56.671542  743547 cri.go:89] found id: "e1e39d0565d3822bf2f251fdb0e8de5f07938ae3aad30710f3eb435ed8294864"
	I1202 20:54:56.671588  743547 cri.go:89] found id: "b30d0a318021ad78d96505cbec12dab08e463997373813e56adc6e14d585834d"
	I1202 20:54:56.671610  743547 cri.go:89] found id: "670db3462ea1c5beb2d55dfd0859b3df17a3bf33ad117a56693583fcb4ccdd66"
	I1202 20:54:56.671636  743547 cri.go:89] found id: ""
	I1202 20:54:56.671705  743547 ssh_runner.go:195] Run: sudo runc list -f json
	W1202 20:54:56.690130  743547 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T20:54:56Z" level=error msg="open /run/runc: no such file or directory"
	I1202 20:54:56.690230  743547 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 20:54:56.708246  743547 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1202 20:54:56.708273  743547 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1202 20:54:56.708319  743547 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1202 20:54:56.720174  743547 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1202 20:54:56.721412  743547 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-992336" does not appear in /home/jenkins/minikube-integration/21997-407427/kubeconfig
	I1202 20:54:56.721919  743547 kubeconfig.go:62] /home/jenkins/minikube-integration/21997-407427/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-992336" cluster setting kubeconfig missing "old-k8s-version-992336" context setting]
	I1202 20:54:56.723060  743547 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-407427/kubeconfig: {Name:mk71769b01652992a8aaa239aae4f5c38b3500e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:54:56.725527  743547 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1202 20:54:56.740149  743547 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1202 20:54:56.740191  743547 kubeadm.go:602] duration metric: took 31.910169ms to restartPrimaryControlPlane
	I1202 20:54:56.740203  743547 kubeadm.go:403] duration metric: took 111.45868ms to StartCluster
	I1202 20:54:56.740224  743547 settings.go:142] acquiring lock: {Name:mk79858205e0c110b31d2645b49097e349ff37c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:54:56.740303  743547 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-407427/kubeconfig
	I1202 20:54:56.741496  743547 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-407427/kubeconfig: {Name:mk71769b01652992a8aaa239aae4f5c38b3500e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:54:56.741802  743547 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 20:54:56.742098  743547 config.go:182] Loaded profile config "old-k8s-version-992336": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1202 20:54:56.742170  743547 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1202 20:54:56.742263  743547 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-992336"
	I1202 20:54:56.742288  743547 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-992336"
	W1202 20:54:56.742297  743547 addons.go:248] addon storage-provisioner should already be in state true
	I1202 20:54:56.742330  743547 host.go:66] Checking if "old-k8s-version-992336" exists ...
	I1202 20:54:56.742855  743547 cli_runner.go:164] Run: docker container inspect old-k8s-version-992336 --format={{.State.Status}}
	I1202 20:54:56.742984  743547 addons.go:70] Setting dashboard=true in profile "old-k8s-version-992336"
	I1202 20:54:56.743010  743547 addons.go:239] Setting addon dashboard=true in "old-k8s-version-992336"
	W1202 20:54:56.743021  743547 addons.go:248] addon dashboard should already be in state true
	I1202 20:54:56.743017  743547 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-992336"
	I1202 20:54:56.743057  743547 host.go:66] Checking if "old-k8s-version-992336" exists ...
	I1202 20:54:56.743058  743547 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-992336"
	I1202 20:54:56.743415  743547 cli_runner.go:164] Run: docker container inspect old-k8s-version-992336 --format={{.State.Status}}
	I1202 20:54:56.743565  743547 cli_runner.go:164] Run: docker container inspect old-k8s-version-992336 --format={{.State.Status}}
	I1202 20:54:56.747183  743547 out.go:179] * Verifying Kubernetes components...
	I1202 20:54:56.751095  743547 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 20:54:56.779215  743547 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 20:54:56.779222  743547 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1202 20:54:56.780910  743547 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 20:54:56.780933  743547 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1202 20:54:56.780934  743547 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1202 20:54:55.518402  736301 out.go:252]   - Configuring RBAC rules ...
	I1202 20:54:55.518551  736301 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1202 20:54:55.525177  736301 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1202 20:54:55.532974  736301 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1202 20:54:55.536672  736301 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1202 20:54:55.540648  736301 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1202 20:54:55.544671  736301 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1202 20:54:55.854962  736301 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1202 20:54:56.282748  736301 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1202 20:54:56.855924  736301 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1202 20:54:56.858599  736301 kubeadm.go:319] 
	I1202 20:54:56.858728  736301 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1202 20:54:56.858735  736301 kubeadm.go:319] 
	I1202 20:54:56.858833  736301 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1202 20:54:56.858838  736301 kubeadm.go:319] 
	I1202 20:54:56.858870  736301 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1202 20:54:56.858943  736301 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1202 20:54:56.859016  736301 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1202 20:54:56.859022  736301 kubeadm.go:319] 
	I1202 20:54:56.859103  736301 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1202 20:54:56.859109  736301 kubeadm.go:319] 
	I1202 20:54:56.859165  736301 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1202 20:54:56.859178  736301 kubeadm.go:319] 
	I1202 20:54:56.859235  736301 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1202 20:54:56.859323  736301 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1202 20:54:56.859397  736301 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1202 20:54:56.859403  736301 kubeadm.go:319] 
	I1202 20:54:56.859502  736301 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1202 20:54:56.859589  736301 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1202 20:54:56.859596  736301 kubeadm.go:319] 
	I1202 20:54:56.859693  736301 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token clatot.hc48jyk0hvxonz06 \
	I1202 20:54:56.859818  736301 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:48349ffa2369b87cd15480ece56edb1c5c5d97a828930f9dfbf1d1ceccf80ff4 \
	I1202 20:54:56.859842  736301 kubeadm.go:319] 	--control-plane 
	I1202 20:54:56.859847  736301 kubeadm.go:319] 
	I1202 20:54:56.859939  736301 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1202 20:54:56.859945  736301 kubeadm.go:319] 
	I1202 20:54:56.860051  736301 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token clatot.hc48jyk0hvxonz06 \
	I1202 20:54:56.860179  736301 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:48349ffa2369b87cd15480ece56edb1c5c5d97a828930f9dfbf1d1ceccf80ff4 
	I1202 20:54:56.865687  736301 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1202 20:54:56.865923  736301 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1202 20:54:56.865962  736301 cni.go:84] Creating CNI manager for ""
	I1202 20:54:56.865975  736301 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 20:54:56.868615  736301 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1202 20:54:55.389753  727677 node_ready.go:57] node "no-preload-336331" has "Ready":"False" status (will retry)
	W1202 20:54:57.391499  727677 node_ready.go:57] node "no-preload-336331" has "Ready":"False" status (will retry)
	I1202 20:54:57.889990  727677 node_ready.go:49] node "no-preload-336331" is "Ready"
	I1202 20:54:57.890026  727677 node_ready.go:38] duration metric: took 13.504157695s for node "no-preload-336331" to be "Ready" ...
	I1202 20:54:57.890044  727677 api_server.go:52] waiting for apiserver process to appear ...
	I1202 20:54:57.890144  727677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:54:57.912775  727677 api_server.go:72] duration metric: took 13.890609716s to wait for apiserver process to appear ...
	I1202 20:54:57.912809  727677 api_server.go:88] waiting for apiserver healthz status ...
	I1202 20:54:57.912934  727677 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1202 20:54:57.923648  727677 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1202 20:54:57.925968  727677 api_server.go:141] control plane version: v1.35.0-beta.0
	I1202 20:54:57.926004  727677 api_server.go:131] duration metric: took 13.121364ms to wait for apiserver health ...
	I1202 20:54:57.926015  727677 system_pods.go:43] waiting for kube-system pods to appear ...
	I1202 20:54:57.930714  727677 system_pods.go:59] 8 kube-system pods found
	I1202 20:54:57.930823  727677 system_pods.go:61] "coredns-7d764666f9-ghxk6" [1696ea67-a1db-437c-bada-07c12d4e9fc8] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 20:54:57.930836  727677 system_pods.go:61] "etcd-no-preload-336331" [7e4664de-2a98-4d1e-911f-2cb479f4a42c] Running
	I1202 20:54:57.930844  727677 system_pods.go:61] "kindnet-5blk7" [8fc0ac0d-1c00-4916-99fd-a1b4e6eec75e] Running
	I1202 20:54:57.930851  727677 system_pods.go:61] "kube-apiserver-no-preload-336331" [09086c71-7e4a-40ce-b450-3a3a76d2b092] Running
	I1202 20:54:57.930880  727677 system_pods.go:61] "kube-controller-manager-no-preload-336331" [d556ac70-884a-46d0-aa2d-4fbd065aa125] Running
	I1202 20:54:57.930886  727677 system_pods.go:61] "kube-proxy-qc2v9" [91426b3b-e557-4959-91b3-cb5e256351ac] Running
	I1202 20:54:57.930901  727677 system_pods.go:61] "kube-scheduler-no-preload-336331" [b648b0ee-a3d0-41d2-93b9-fe72216bcec3] Running
	I1202 20:54:57.930910  727677 system_pods.go:61] "storage-provisioner" [e3c38dcd-7f1f-4382-bf82-b09cde780bdb] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 20:54:57.930921  727677 system_pods.go:74] duration metric: took 4.81671ms to wait for pod list to return data ...
	I1202 20:54:57.930933  727677 default_sa.go:34] waiting for default service account to be created ...
	I1202 20:54:57.934602  727677 default_sa.go:45] found service account: "default"
	I1202 20:54:57.934629  727677 default_sa.go:55] duration metric: took 3.687516ms for default service account to be created ...
	I1202 20:54:57.934641  727677 system_pods.go:116] waiting for k8s-apps to be running ...
	I1202 20:54:57.939126  727677 system_pods.go:86] 8 kube-system pods found
	I1202 20:54:57.939176  727677 system_pods.go:89] "coredns-7d764666f9-ghxk6" [1696ea67-a1db-437c-bada-07c12d4e9fc8] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 20:54:57.939186  727677 system_pods.go:89] "etcd-no-preload-336331" [7e4664de-2a98-4d1e-911f-2cb479f4a42c] Running
	I1202 20:54:57.939194  727677 system_pods.go:89] "kindnet-5blk7" [8fc0ac0d-1c00-4916-99fd-a1b4e6eec75e] Running
	I1202 20:54:57.939200  727677 system_pods.go:89] "kube-apiserver-no-preload-336331" [09086c71-7e4a-40ce-b450-3a3a76d2b092] Running
	I1202 20:54:57.939207  727677 system_pods.go:89] "kube-controller-manager-no-preload-336331" [d556ac70-884a-46d0-aa2d-4fbd065aa125] Running
	I1202 20:54:57.939212  727677 system_pods.go:89] "kube-proxy-qc2v9" [91426b3b-e557-4959-91b3-cb5e256351ac] Running
	I1202 20:54:57.939217  727677 system_pods.go:89] "kube-scheduler-no-preload-336331" [b648b0ee-a3d0-41d2-93b9-fe72216bcec3] Running
	I1202 20:54:57.939225  727677 system_pods.go:89] "storage-provisioner" [e3c38dcd-7f1f-4382-bf82-b09cde780bdb] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 20:54:57.939256  727677 retry.go:31] will retry after 254.058998ms: missing components: kube-dns
	I1202 20:54:58.199625  727677 system_pods.go:86] 8 kube-system pods found
	I1202 20:54:58.199671  727677 system_pods.go:89] "coredns-7d764666f9-ghxk6" [1696ea67-a1db-437c-bada-07c12d4e9fc8] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 20:54:58.199680  727677 system_pods.go:89] "etcd-no-preload-336331" [7e4664de-2a98-4d1e-911f-2cb479f4a42c] Running
	I1202 20:54:58.199689  727677 system_pods.go:89] "kindnet-5blk7" [8fc0ac0d-1c00-4916-99fd-a1b4e6eec75e] Running
	I1202 20:54:58.199696  727677 system_pods.go:89] "kube-apiserver-no-preload-336331" [09086c71-7e4a-40ce-b450-3a3a76d2b092] Running
	I1202 20:54:58.199703  727677 system_pods.go:89] "kube-controller-manager-no-preload-336331" [d556ac70-884a-46d0-aa2d-4fbd065aa125] Running
	I1202 20:54:58.199708  727677 system_pods.go:89] "kube-proxy-qc2v9" [91426b3b-e557-4959-91b3-cb5e256351ac] Running
	I1202 20:54:58.199713  727677 system_pods.go:89] "kube-scheduler-no-preload-336331" [b648b0ee-a3d0-41d2-93b9-fe72216bcec3] Running
	I1202 20:54:58.199722  727677 system_pods.go:89] "storage-provisioner" [e3c38dcd-7f1f-4382-bf82-b09cde780bdb] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 20:54:58.199742  727677 retry.go:31] will retry after 342.156745ms: missing components: kube-dns
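The repeated "will retry after …ms: missing components: kube-dns" lines show the polling pattern used throughout this phase: list the kube-system pods, check the required components, and sleep a short randomized backoff before trying again until the wait budget runs out. A minimal sketch of that loop; the check function here is a stand-in for "are all required kube-system pods Running?", not minikube's API:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitFor polls check until it succeeds or the deadline passes, sleeping a
// jittered delay between attempts, like the retry.go lines in the log above.
func waitFor(timeout time.Duration, check func() error) error {
	deadline := time.Now().Add(timeout)
	for {
		err := check()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out: %w", err)
		}
		delay := time.Duration(200+rand.Intn(300)) * time.Millisecond
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
	}
}

func main() {
	attempts := 0
	_ = waitFor(5*time.Second, func() error {
		attempts++
		if attempts < 3 {
			return errors.New("missing components: kube-dns")
		}
		return nil
	})
}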
	I1202 20:54:56.780993  743547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-992336
	I1202 20:54:56.782584  743547 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1202 20:54:56.782619  743547 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1202 20:54:56.782691  743547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-992336
	I1202 20:54:56.784631  743547 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-992336"
	W1202 20:54:56.784664  743547 addons.go:248] addon default-storageclass should already be in state true
	I1202 20:54:56.784697  743547 host.go:66] Checking if "old-k8s-version-992336" exists ...
	I1202 20:54:56.786161  743547 cli_runner.go:164] Run: docker container inspect old-k8s-version-992336 --format={{.State.Status}}
	I1202 20:54:56.831348  743547 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/old-k8s-version-992336/id_rsa Username:docker}
	I1202 20:54:56.838761  743547 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/old-k8s-version-992336/id_rsa Username:docker}
	I1202 20:54:56.839118  743547 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1202 20:54:56.839144  743547 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1202 20:54:56.839212  743547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-992336
	I1202 20:54:56.877157  743547 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/old-k8s-version-992336/id_rsa Username:docker}
	I1202 20:54:57.000378  743547 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1202 20:54:57.000478  743547 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1202 20:54:57.001473  743547 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 20:54:57.051688  743547 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 20:54:57.053612  743547 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-992336" to be "Ready" ...
	I1202 20:54:57.062772  743547 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1202 20:54:57.062802  743547 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1202 20:54:57.099632  743547 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1202 20:54:57.099665  743547 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1202 20:54:57.102715  743547 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1202 20:54:57.128982  743547 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1202 20:54:57.129013  743547 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1202 20:54:57.151853  743547 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1202 20:54:57.151871  743547 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1202 20:54:57.180800  743547 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1202 20:54:57.180826  743547 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1202 20:54:57.207394  743547 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1202 20:54:57.207423  743547 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1202 20:54:57.238669  743547 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1202 20:54:57.238701  743547 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1202 20:54:57.264954  743547 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1202 20:54:57.265009  743547 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1202 20:54:57.288116  743547 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
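All of the dashboard manifests staged above are applied in a single kubectl invocation, using the kubeconfig on the node and the kubectl binary matching the cluster version (v1.28.0 here), which avoids client/server skew during bootstrap. A rough sketch of assembling that command (paths copied from the log; the helper itself is illustrative and omits the sudo wrapper):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// applyManifests builds the same kind of command as the log line above:
// one `kubectl apply` with a -f flag per addon manifest.
func applyManifests(kubectl, kubeconfig string, files []string) *exec.Cmd {
	args := []string{"apply"}
	for _, f := range files {
		args = append(args, "-f", f)
	}
	cmd := exec.Command(kubectl, args...)
	cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
	return cmd
}

func main() {
	cmd := applyManifests(
		"/var/lib/minikube/binaries/v1.28.0/kubectl",
		"/var/lib/minikube/kubeconfig",
		[]string{
			"/etc/kubernetes/addons/dashboard-ns.yaml",
			"/etc/kubernetes/addons/dashboard-svc.yaml",
		},
	)
	fmt.Println(cmd.String())
}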
	I1202 20:54:59.263131  743547 node_ready.go:49] node "old-k8s-version-992336" is "Ready"
	I1202 20:54:59.263168  743547 node_ready.go:38] duration metric: took 2.209490941s for node "old-k8s-version-992336" to be "Ready" ...
	I1202 20:54:59.263187  743547 api_server.go:52] waiting for apiserver process to appear ...
	I1202 20:54:59.263244  743547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:55:00.033214  743547 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.981484522s)
	I1202 20:55:00.033304  743547 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.93055748s)
	I1202 20:55:00.490811  743547 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.202644047s)
	I1202 20:55:00.490986  743547 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.227720068s)
	I1202 20:55:00.491022  743547 api_server.go:72] duration metric: took 3.749188411s to wait for apiserver process to appear ...
	I1202 20:55:00.491030  743547 api_server.go:88] waiting for apiserver healthz status ...
	I1202 20:55:00.491062  743547 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1202 20:55:00.493010  743547 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-992336 addons enable metrics-server
	
	I1202 20:55:00.494606  743547 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1202 20:54:58.547286  727677 system_pods.go:86] 8 kube-system pods found
	I1202 20:54:58.547327  727677 system_pods.go:89] "coredns-7d764666f9-ghxk6" [1696ea67-a1db-437c-bada-07c12d4e9fc8] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 20:54:58.547335  727677 system_pods.go:89] "etcd-no-preload-336331" [7e4664de-2a98-4d1e-911f-2cb479f4a42c] Running
	I1202 20:54:58.547344  727677 system_pods.go:89] "kindnet-5blk7" [8fc0ac0d-1c00-4916-99fd-a1b4e6eec75e] Running
	I1202 20:54:58.547349  727677 system_pods.go:89] "kube-apiserver-no-preload-336331" [09086c71-7e4a-40ce-b450-3a3a76d2b092] Running
	I1202 20:54:58.547355  727677 system_pods.go:89] "kube-controller-manager-no-preload-336331" [d556ac70-884a-46d0-aa2d-4fbd065aa125] Running
	I1202 20:54:58.547359  727677 system_pods.go:89] "kube-proxy-qc2v9" [91426b3b-e557-4959-91b3-cb5e256351ac] Running
	I1202 20:54:58.547364  727677 system_pods.go:89] "kube-scheduler-no-preload-336331" [b648b0ee-a3d0-41d2-93b9-fe72216bcec3] Running
	I1202 20:54:58.547371  727677 system_pods.go:89] "storage-provisioner" [e3c38dcd-7f1f-4382-bf82-b09cde780bdb] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 20:54:58.547389  727677 retry.go:31] will retry after 368.951031ms: missing components: kube-dns
	I1202 20:54:58.921450  727677 system_pods.go:86] 8 kube-system pods found
	I1202 20:54:58.921490  727677 system_pods.go:89] "coredns-7d764666f9-ghxk6" [1696ea67-a1db-437c-bada-07c12d4e9fc8] Running
	I1202 20:54:58.921499  727677 system_pods.go:89] "etcd-no-preload-336331" [7e4664de-2a98-4d1e-911f-2cb479f4a42c] Running
	I1202 20:54:58.921505  727677 system_pods.go:89] "kindnet-5blk7" [8fc0ac0d-1c00-4916-99fd-a1b4e6eec75e] Running
	I1202 20:54:58.921510  727677 system_pods.go:89] "kube-apiserver-no-preload-336331" [09086c71-7e4a-40ce-b450-3a3a76d2b092] Running
	I1202 20:54:58.921515  727677 system_pods.go:89] "kube-controller-manager-no-preload-336331" [d556ac70-884a-46d0-aa2d-4fbd065aa125] Running
	I1202 20:54:58.921520  727677 system_pods.go:89] "kube-proxy-qc2v9" [91426b3b-e557-4959-91b3-cb5e256351ac] Running
	I1202 20:54:58.921525  727677 system_pods.go:89] "kube-scheduler-no-preload-336331" [b648b0ee-a3d0-41d2-93b9-fe72216bcec3] Running
	I1202 20:54:58.921530  727677 system_pods.go:89] "storage-provisioner" [e3c38dcd-7f1f-4382-bf82-b09cde780bdb] Running
	I1202 20:54:58.921541  727677 system_pods.go:126] duration metric: took 986.887188ms to wait for k8s-apps to be running ...
	I1202 20:54:58.921550  727677 system_svc.go:44] waiting for kubelet service to be running ....
	I1202 20:54:58.921604  727677 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 20:54:58.936808  727677 system_svc.go:56] duration metric: took 15.220965ms WaitForService to wait for kubelet
	I1202 20:54:58.936842  727677 kubeadm.go:587] duration metric: took 14.914814409s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 20:54:58.936868  727677 node_conditions.go:102] verifying NodePressure condition ...
	I1202 20:54:58.940483  727677 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1202 20:54:58.940521  727677 node_conditions.go:123] node cpu capacity is 8
	I1202 20:54:58.940543  727677 node_conditions.go:105] duration metric: took 3.669091ms to run NodePressure ...
	I1202 20:54:58.940560  727677 start.go:242] waiting for startup goroutines ...
	I1202 20:54:58.940570  727677 start.go:247] waiting for cluster config update ...
	I1202 20:54:58.940582  727677 start.go:256] writing updated cluster config ...
	I1202 20:54:58.940940  727677 ssh_runner.go:195] Run: rm -f paused
	I1202 20:54:58.946442  727677 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1202 20:54:58.950994  727677 pod_ready.go:83] waiting for pod "coredns-7d764666f9-ghxk6" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:54:58.956333  727677 pod_ready.go:94] pod "coredns-7d764666f9-ghxk6" is "Ready"
	I1202 20:54:58.956362  727677 pod_ready.go:86] duration metric: took 5.338212ms for pod "coredns-7d764666f9-ghxk6" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:54:58.961022  727677 pod_ready.go:83] waiting for pod "etcd-no-preload-336331" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:54:58.967156  727677 pod_ready.go:94] pod "etcd-no-preload-336331" is "Ready"
	I1202 20:54:58.967197  727677 pod_ready.go:86] duration metric: took 6.143693ms for pod "etcd-no-preload-336331" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:54:58.970251  727677 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-336331" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:54:58.975849  727677 pod_ready.go:94] pod "kube-apiserver-no-preload-336331" is "Ready"
	I1202 20:54:58.975894  727677 pod_ready.go:86] duration metric: took 5.606631ms for pod "kube-apiserver-no-preload-336331" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:54:58.979032  727677 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-336331" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:54:59.351307  727677 pod_ready.go:94] pod "kube-controller-manager-no-preload-336331" is "Ready"
	I1202 20:54:59.351337  727677 pod_ready.go:86] duration metric: took 372.272976ms for pod "kube-controller-manager-no-preload-336331" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:54:59.552225  727677 pod_ready.go:83] waiting for pod "kube-proxy-qc2v9" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:54:59.951963  727677 pod_ready.go:94] pod "kube-proxy-qc2v9" is "Ready"
	I1202 20:54:59.952012  727677 pod_ready.go:86] duration metric: took 399.754386ms for pod "kube-proxy-qc2v9" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:55:00.151862  727677 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-336331" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:55:00.551517  727677 pod_ready.go:94] pod "kube-scheduler-no-preload-336331" is "Ready"
	I1202 20:55:00.551567  727677 pod_ready.go:86] duration metric: took 399.673435ms for pod "kube-scheduler-no-preload-336331" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:55:00.551585  727677 pod_ready.go:40] duration metric: took 1.605104621s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1202 20:55:00.623116  727677 start.go:625] kubectl: 1.34.2, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1202 20:55:00.625337  727677 out.go:179] * Done! kubectl is now configured to use "no-preload-336331" cluster and "default" namespace by default
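The "minor skew: 1" note just above comes from comparing the local kubectl client (1.34.2) with the cluster's control-plane version (1.35.0-beta.0); kubectl supports one minor version of skew in either direction, so this is informational only. A small sketch of that comparison (a hypothetical helper, shown for illustration):

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorSkew returns the absolute difference between the minor versions of two
// "major.minor.patch[-pre]" strings, e.g. "1.34.2" vs "1.35.0-beta.0".
func minorSkew(client, server string) (int, error) {
	minor := func(v string) (int, error) {
		parts := strings.Split(v, ".")
		if len(parts) < 2 {
			return 0, fmt.Errorf("bad version %q", v)
		}
		return strconv.Atoi(parts[1])
	}
	c, err := minor(client)
	if err != nil {
		return 0, err
	}
	s, err := minor(server)
	if err != nil {
		return 0, err
	}
	if c > s {
		return c - s, nil
	}
	return s - c, nil
}

func main() {
	skew, _ := minorSkew("1.34.2", "1.35.0-beta.0")
	fmt.Println("minor skew:", skew) // 1
}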
	I1202 20:54:56.429637  744523 kubeadm.go:884] updating cluster {Name:newest-cni-245604 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-245604 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizati
ons:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 20:54:56.429813  744523 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1202 20:54:56.429873  744523 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 20:54:56.470335  744523 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
	I1202 20:54:56.470367  744523 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-beta.0 registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 registry.k8s.io/kube-scheduler:v1.35.0-beta.0 registry.k8s.io/kube-proxy:v1.35.0-beta.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.5-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1202 20:54:56.470443  744523 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 20:54:56.470709  744523 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 20:54:56.470835  744523 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 20:54:56.470944  744523 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 20:54:56.471113  744523 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 20:54:56.471227  744523 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1202 20:54:56.471312  744523 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
	I1202 20:54:56.471416  744523 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1202 20:54:56.474235  744523 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1202 20:54:56.474674  744523 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 20:54:56.474720  744523 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
	I1202 20:54:56.474716  744523 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 20:54:56.474788  744523 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1202 20:54:56.475527  744523 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 20:54:56.475871  744523 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 20:54:56.476514  744523 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 20:54:56.627881  744523 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 20:54:56.635408  744523 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.5-0
	I1202 20:54:56.645721  744523 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.13.1
	I1202 20:54:56.656260  744523 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 20:54:56.665724  744523 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 20:54:56.674018  744523 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1202 20:54:56.686804  744523 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 20:54:56.690583  744523 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" does not exist at hash "45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc" in container runtime
	I1202 20:54:56.690704  744523 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 20:54:56.690760  744523 ssh_runner.go:195] Run: which crictl
	I1202 20:54:56.707645  744523 cache_images.go:118] "registry.k8s.io/etcd:3.6.5-0" needs transfer: "registry.k8s.io/etcd:3.6.5-0" does not exist at hash "a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1" in container runtime
	I1202 20:54:56.707701  744523 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.5-0
	I1202 20:54:56.707771  744523 ssh_runner.go:195] Run: which crictl
	I1202 20:54:56.729690  744523 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139" in container runtime
	I1202 20:54:56.729741  744523 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1202 20:54:56.729790  744523 ssh_runner.go:195] Run: which crictl
	I1202 20:54:56.730634  744523 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" does not exist at hash "aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b" in container runtime
	I1202 20:54:56.730670  744523 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 20:54:56.730712  744523 ssh_runner.go:195] Run: which crictl
	I1202 20:54:56.748602  744523 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" does not exist at hash "7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46" in container runtime
	I1202 20:54:56.748650  744523 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 20:54:56.748713  744523 ssh_runner.go:195] Run: which crictl
	I1202 20:54:56.779664  744523 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I1202 20:54:56.779729  744523 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 20:54:56.779748  744523 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1202 20:54:56.779805  744523 ssh_runner.go:195] Run: which crictl
	I1202 20:54:56.779817  744523 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1202 20:54:56.779663  744523 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-beta.0" does not exist at hash "8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810" in container runtime
	I1202 20:54:56.779842  744523 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 20:54:56.779872  744523 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 20:54:56.779878  744523 ssh_runner.go:195] Run: which crictl
	I1202 20:54:56.779903  744523 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 20:54:56.779731  744523 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1202 20:54:56.877780  744523 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 20:54:56.893301  744523 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1202 20:54:56.893403  744523 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1202 20:54:56.893456  744523 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 20:54:56.893522  744523 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 20:54:56.893577  744523 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 20:54:56.893630  744523 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1202 20:54:56.979424  744523 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 20:54:56.979467  744523 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1202 20:54:56.979427  744523 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1202 20:54:56.979522  744523 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 20:54:56.979694  744523 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 20:54:56.979787  744523 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 20:54:56.979870  744523 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1202 20:54:57.063429  744523 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1
	I1202 20:54:57.063525  744523 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
	I1202 20:54:57.063574  744523 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
	I1202 20:54:57.063635  744523 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1202 20:54:57.063715  744523 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
	I1202 20:54:57.063773  744523 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1202 20:54:57.063798  744523 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1202 20:54:57.063529  744523 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1202 20:54:57.073765  744523 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0
	I1202 20:54:57.073970  744523 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1202 20:54:57.073976  744523 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 20:54:57.074150  744523 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0': No such file or directory
	I1202 20:54:57.074177  744523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0 (23131648 bytes)
	I1202 20:54:57.074309  744523 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0
	I1202 20:54:57.090729  744523 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0': No such file or directory
	I1202 20:54:57.090765  744523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0 (27682304 bytes)
	I1202 20:54:57.090852  744523 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I1202 20:54:57.090867  744523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (23562752 bytes)
	I1202 20:54:57.091043  744523 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0': No such file or directory
	I1202 20:54:57.091207  744523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0 (17239040 bytes)
	I1202 20:54:57.151485  744523 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.5-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.5-0': No such file or directory
	I1202 20:54:57.151520  744523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 --> /var/lib/minikube/images/etcd_3.6.5-0 (22883840 bytes)
	I1202 20:54:57.151798  744523 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
	I1202 20:54:57.151964  744523 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1202 20:54:57.152031  744523 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1202 20:54:57.152553  744523 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1202 20:54:57.254451  744523 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1202 20:54:57.254502  744523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (321024 bytes)
	I1202 20:54:57.257229  744523 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0-beta.0': No such file or directory
	I1202 20:54:57.257317  744523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0 (25788928 bytes)
	I1202 20:54:57.392528  744523 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1202 20:54:57.392642  744523 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	I1202 20:54:57.810758  744523 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 20:54:57.869494  744523 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 from cache
	I1202 20:54:57.869554  744523 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1202 20:54:57.869628  744523 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1202 20:54:57.932920  744523 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1202 20:54:57.932975  744523 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 20:54:57.933024  744523 ssh_runner.go:195] Run: which crictl
	I1202 20:54:59.294687  744523 ssh_runner.go:235] Completed: which crictl: (1.361639017s)
	I1202 20:54:59.294768  744523 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 20:54:59.294838  744523 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: (1.425189868s)
	I1202 20:54:59.294869  744523 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 from cache
	I1202 20:54:59.294918  744523 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1202 20:54:59.294967  744523 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1202 20:55:00.817466  744523 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.522668777s)
	I1202 20:55:00.817551  744523 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 20:55:00.817627  744523 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: (1.522632151s)
	I1202 20:55:00.817648  744523 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 from cache
	I1202 20:55:00.817674  744523 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1202 20:55:00.817704  744523 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1
	I1202 20:55:00.848635  744523 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
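
Note: the "needs transfer" lines above are minikube comparing the image ID the runtime reports against the expected hash; when they differ or the image is missing, the stale tag is removed with crictl before the cached tarball is re-loaded. A minimal way to repeat that check by hand (a sketch, assuming shell access to the node, e.g. via minikube ssh for this profile; both commands appear verbatim in the log):

    # ID the runtime currently has for the image (non-zero exit if absent)
    sudo podman image inspect --format '{{.Id}}' gcr.io/k8s-minikube/storage-provisioner:v5
    # drop the stale tag so the cached copy can be loaded cleanly
    sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
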
	I1202 20:54:56.870332  736301 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1202 20:54:56.877419  736301 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1202 20:54:56.877436  736301 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1202 20:54:56.902275  736301 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1202 20:54:57.337788  736301 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1202 20:54:57.337991  736301 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 20:54:57.338104  736301 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-997805 minikube.k8s.io/updated_at=2025_12_02T20_54_57_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=c0dec3f81fa1d67ed4ee425e0873d0aa009f9b92 minikube.k8s.io/name=default-k8s-diff-port-997805 minikube.k8s.io/primary=true
	I1202 20:54:57.477817  736301 ops.go:34] apiserver oom_adj: -16
	I1202 20:54:57.477829  736301 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 20:54:57.978414  736301 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 20:54:58.478319  736301 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 20:54:58.980154  736301 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 20:54:59.478288  736301 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 20:54:59.978296  736301 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 20:55:00.478855  736301 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 20:55:00.978336  736301 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 20:55:01.478217  736301 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 20:55:01.560150  736301 kubeadm.go:1114] duration metric: took 4.222209683s to wait for elevateKubeSystemPrivileges
	I1202 20:55:01.560198  736301 kubeadm.go:403] duration metric: took 16.697560258s to StartCluster
	I1202 20:55:01.560223  736301 settings.go:142] acquiring lock: {Name:mk79858205e0c110b31d2645b49097e349ff37c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:55:01.560308  736301 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-407427/kubeconfig
	I1202 20:55:01.561505  736301 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-407427/kubeconfig: {Name:mk71769b01652992a8aaa239aae4f5c38b3500e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:55:01.561778  736301 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 20:55:01.561831  736301 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1202 20:55:01.561928  736301 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-997805"
	I1202 20:55:01.561953  736301 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-997805"
	I1202 20:55:01.561973  736301 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-997805"
	I1202 20:55:01.561980  736301 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-997805"
	I1202 20:55:01.561813  736301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1202 20:55:01.562021  736301 config.go:182] Loaded profile config "default-k8s-diff-port-997805": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 20:55:01.562004  736301 host.go:66] Checking if "default-k8s-diff-port-997805" exists ...
	I1202 20:55:01.562425  736301 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-997805 --format={{.State.Status}}
	I1202 20:55:01.562664  736301 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-997805 --format={{.State.Status}}
	I1202 20:55:01.564706  736301 out.go:179] * Verifying Kubernetes components...
	I1202 20:55:01.566104  736301 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 20:55:01.589813  736301 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-997805"
	I1202 20:55:01.589873  736301 host.go:66] Checking if "default-k8s-diff-port-997805" exists ...
	I1202 20:55:01.590425  736301 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-997805 --format={{.State.Status}}
	I1202 20:55:01.590987  736301 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 20:55:01.592179  736301 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 20:55:01.592201  736301 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1202 20:55:01.592270  736301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-997805
	I1202 20:55:01.619646  736301 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1202 20:55:01.619694  736301 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1202 20:55:01.619759  736301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-997805
	I1202 20:55:01.627920  736301 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33483 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/default-k8s-diff-port-997805/id_rsa Username:docker}
	I1202 20:55:01.654225  736301 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33483 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/default-k8s-diff-port-997805/id_rsa Username:docker}
	I1202 20:55:01.682285  736301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1202 20:55:01.736624  736301 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 20:55:01.766566  736301 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 20:55:01.788518  736301 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1202 20:55:01.900235  736301 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1202 20:55:01.901603  736301 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-997805" to be "Ready" ...
	I1202 20:55:02.127286  736301 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
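
Note: the default-k8s-diff-port-997805 bring-up above follows the usual post-kubeadm sequence: apply the CNI manifest, bind cluster-admin to the kube-system default service account (the minikube-rbac clusterrolebinding), label the node, poll for the default service account, inject the host.minikube.internal record into the CoreDNS ConfigMap, and enable the storage addons. A hedged way to spot-check those steps from inside the node (plain kubectl assumed on PATH; the test itself calls the versioned binary under /var/lib/minikube/binaries):

    sudo kubectl --kubeconfig=/var/lib/minikube/kubeconfig get clusterrolebinding minikube-rbac
    sudo kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n default get sa default
    sudo kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | grep -A3 'hosts {'
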
	I1202 20:55:00.495919  743547 addons.go:530] duration metric: took 3.753750261s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1202 20:55:00.497622  743547 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1202 20:55:00.497666  743547 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1202 20:55:00.991191  743547 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1202 20:55:00.996136  743547 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1202 20:55:00.997346  743547 api_server.go:141] control plane version: v1.28.0
	I1202 20:55:00.997377  743547 api_server.go:131] duration metric: took 506.333183ms to wait for apiserver health ...
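
Note: the 500 above is the apiserver's verbose health report while the rbac/bootstrap-roles post-start hook is still pending; once that hook finishes, /healthz returns a plain 200 "ok" and the wait completes in about half a second. The same per-check breakdown can be requested directly (a sketch; unauthenticated access to /healthz is normally allowed by the default system:public-info-viewer binding, adjust if that has been tightened):

    curl -sk 'https://192.168.94.2:8443/healthz?verbose'
    curl -sk 'https://192.168.94.2:8443/healthz'     # plain "ok" once healthy
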
	I1202 20:55:00.997390  743547 system_pods.go:43] waiting for kube-system pods to appear ...
	I1202 20:55:01.001606  743547 system_pods.go:59] 8 kube-system pods found
	I1202 20:55:01.001663  743547 system_pods.go:61] "coredns-5dd5756b68-ptzsf" [14b9d2d2-4853-419f-ad27-5d6f4c9c7e2c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 20:55:01.001678  743547 system_pods.go:61] "etcd-old-k8s-version-992336" [22527607-8153-442e-97cb-93555cbcdd3a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1202 20:55:01.001689  743547 system_pods.go:61] "kindnet-jvmsp" [51a76a82-d4d0-4909-a7a7-49ad2e3fd9f0] Running
	I1202 20:55:01.001703  743547 system_pods.go:61] "kube-apiserver-old-k8s-version-992336" [5049999c-2987-49b7-ba74-9d7621b0759a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1202 20:55:01.001716  743547 system_pods.go:61] "kube-controller-manager-old-k8s-version-992336" [34f637f6-d1c4-4620-9705-439b4db0805a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1202 20:55:01.001727  743547 system_pods.go:61] "kube-proxy-qpzt8" [e7130e4a-3fd7-49ba-b6c6-ea6857c76765] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1202 20:55:01.001736  743547 system_pods.go:61] "kube-scheduler-old-k8s-version-992336" [c4e33a26-6df9-440c-9eff-9197bcdfd55c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1202 20:55:01.001748  743547 system_pods.go:61] "storage-provisioner" [398f9134-7016-4782-9541-255e9925dd8d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 20:55:01.001759  743547 system_pods.go:74] duration metric: took 4.359896ms to wait for pod list to return data ...
	I1202 20:55:01.001773  743547 default_sa.go:34] waiting for default service account to be created ...
	I1202 20:55:01.004230  743547 default_sa.go:45] found service account: "default"
	I1202 20:55:01.004254  743547 default_sa.go:55] duration metric: took 2.473014ms for default service account to be created ...
	I1202 20:55:01.004265  743547 system_pods.go:116] waiting for k8s-apps to be running ...
	I1202 20:55:01.008022  743547 system_pods.go:86] 8 kube-system pods found
	I1202 20:55:01.008062  743547 system_pods.go:89] "coredns-5dd5756b68-ptzsf" [14b9d2d2-4853-419f-ad27-5d6f4c9c7e2c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 20:55:01.008112  743547 system_pods.go:89] "etcd-old-k8s-version-992336" [22527607-8153-442e-97cb-93555cbcdd3a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1202 20:55:01.008124  743547 system_pods.go:89] "kindnet-jvmsp" [51a76a82-d4d0-4909-a7a7-49ad2e3fd9f0] Running
	I1202 20:55:01.008135  743547 system_pods.go:89] "kube-apiserver-old-k8s-version-992336" [5049999c-2987-49b7-ba74-9d7621b0759a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1202 20:55:01.008173  743547 system_pods.go:89] "kube-controller-manager-old-k8s-version-992336" [34f637f6-d1c4-4620-9705-439b4db0805a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1202 20:55:01.008187  743547 system_pods.go:89] "kube-proxy-qpzt8" [e7130e4a-3fd7-49ba-b6c6-ea6857c76765] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1202 20:55:01.008197  743547 system_pods.go:89] "kube-scheduler-old-k8s-version-992336" [c4e33a26-6df9-440c-9eff-9197bcdfd55c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1202 20:55:01.008206  743547 system_pods.go:89] "storage-provisioner" [398f9134-7016-4782-9541-255e9925dd8d] Running
	I1202 20:55:01.008233  743547 system_pods.go:126] duration metric: took 3.944236ms to wait for k8s-apps to be running ...
	I1202 20:55:01.008249  743547 system_svc.go:44] waiting for kubelet service to be running ....
	I1202 20:55:01.008306  743547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 20:55:01.025249  743547 system_svc.go:56] duration metric: took 16.988838ms WaitForService to wait for kubelet
	I1202 20:55:01.025289  743547 kubeadm.go:587] duration metric: took 4.283454748s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 20:55:01.025313  743547 node_conditions.go:102] verifying NodePressure condition ...
	I1202 20:55:01.029446  743547 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1202 20:55:01.029479  743547 node_conditions.go:123] node cpu capacity is 8
	I1202 20:55:01.029504  743547 node_conditions.go:105] duration metric: took 4.184149ms to run NodePressure ...
	I1202 20:55:01.029523  743547 start.go:242] waiting for startup goroutines ...
	I1202 20:55:01.029535  743547 start.go:247] waiting for cluster config update ...
	I1202 20:55:01.029549  743547 start.go:256] writing updated cluster config ...
	I1202 20:55:01.029888  743547 ssh_runner.go:195] Run: rm -f paused
	I1202 20:55:01.034901  743547 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1202 20:55:01.039910  743547 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-ptzsf" in "kube-system" namespace to be "Ready" or be gone ...
	W1202 20:55:03.046930  743547 pod_ready.go:104] pod "coredns-5dd5756b68-ptzsf" is not "Ready", error: <nil>
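
Note: after the apiserver turns healthy, the old-k8s-version-992336 run waits for kube-system pods, the default service account and NodePressure conditions, then keeps retrying until the labelled control-plane pods report Ready (coredns is still not Ready above, hence the warnings). Roughly the same wait expressed with kubectl (a sketch, assuming kubectl is pointed at that cluster):

    kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=240s
    kubectl -n kube-system get pods -o wide
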
	I1202 20:55:02.295814  744523 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1: (1.478083279s)
	I1202 20:55:02.295852  744523 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 from cache
	I1202 20:55:02.295876  744523 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.5-0
	I1202 20:55:02.295882  744523 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.447208868s)
	I1202 20:55:02.295924  744523 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.5-0
	I1202 20:55:02.295933  744523 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1202 20:55:02.296025  744523 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1202 20:55:03.814698  744523 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.5-0: (1.518744941s)
	I1202 20:55:03.814738  744523 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 from cache
	I1202 20:55:03.814764  744523 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1202 20:55:03.814810  744523 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.518762728s)
	I1202 20:55:03.814865  744523 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1202 20:55:03.814893  744523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1202 20:55:03.814817  744523 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1202 20:55:04.925056  744523 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: (1.110119383s)
	I1202 20:55:04.925120  744523 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 from cache
	I1202 20:55:04.925145  744523 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1202 20:55:04.925195  744523 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
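
Note: every cached image in the 744523 stream follows the same three steps: stat the tarball under /var/lib/minikube/images, scp it from the host cache when the check fails, then podman load -i so CRI-O can use it. To confirm the staging and the resulting runtime state (a sketch, run inside the node):

    ls -l /var/lib/minikube/images/                              # tarballs staged by scp
    sudo /usr/local/bin/crictl images | grep registry.k8s.io     # images the runtime now serves
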
	I1202 20:55:02.128586  736301 addons.go:530] duration metric: took 566.750529ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1202 20:55:02.404897  736301 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-997805" context rescaled to 1 replicas
	W1202 20:55:03.907516  736301 node_ready.go:57] node "default-k8s-diff-port-997805" has "Ready":"False" status (will retry)
	W1202 20:55:06.528176  736301 node_ready.go:57] node "default-k8s-diff-port-997805" has "Ready":"False" status (will retry)
	W1202 20:55:05.546607  743547 pod_ready.go:104] pod "coredns-5dd5756b68-ptzsf" is not "Ready", error: <nil>
	W1202 20:55:08.053813  743547 pod_ready.go:104] pod "coredns-5dd5756b68-ptzsf" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Dec 02 20:54:58 no-preload-336331 crio[770]: time="2025-12-02T20:54:58.221450568Z" level=info msg="Starting container: 3defb36abff31ed5a67200ef9dfdd959a5b6902af2f85b034beda8b33f6132ff" id=6fb7b7b4-8c66-4532-a33d-80030e2b7c76 name=/runtime.v1.RuntimeService/StartContainer
	Dec 02 20:54:58 no-preload-336331 crio[770]: time="2025-12-02T20:54:58.224516464Z" level=info msg="Started container" PID=2807 containerID=3defb36abff31ed5a67200ef9dfdd959a5b6902af2f85b034beda8b33f6132ff description=kube-system/coredns-7d764666f9-ghxk6/coredns id=6fb7b7b4-8c66-4532-a33d-80030e2b7c76 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7cd133ec8b6a160f3bc6108e360cf86a59ba6008b1a4de228bf2b7a48d96adcc
	Dec 02 20:55:01 no-preload-336331 crio[770]: time="2025-12-02T20:55:01.118305771Z" level=info msg="Running pod sandbox: default/busybox/POD" id=f8977f82-b9f1-4e4e-b95c-8d58a3a0f86b name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 02 20:55:01 no-preload-336331 crio[770]: time="2025-12-02T20:55:01.118388704Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 20:55:01 no-preload-336331 crio[770]: time="2025-12-02T20:55:01.124330526Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:1a6a3378ac09b3435aed1230e064d66b3b67e8767cf7254bddf3764ea4f161aa UID:17098746-a5de-4eb1-afef-faf394ddb509 NetNS:/var/run/netns/a060d8dd-d4b0-41cf-8bac-0b8bda701aa6 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000619290}] Aliases:map[]}"
	Dec 02 20:55:01 no-preload-336331 crio[770]: time="2025-12-02T20:55:01.124382486Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 02 20:55:01 no-preload-336331 crio[770]: time="2025-12-02T20:55:01.135471439Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:1a6a3378ac09b3435aed1230e064d66b3b67e8767cf7254bddf3764ea4f161aa UID:17098746-a5de-4eb1-afef-faf394ddb509 NetNS:/var/run/netns/a060d8dd-d4b0-41cf-8bac-0b8bda701aa6 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000619290}] Aliases:map[]}"
	Dec 02 20:55:01 no-preload-336331 crio[770]: time="2025-12-02T20:55:01.13561257Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 02 20:55:01 no-preload-336331 crio[770]: time="2025-12-02T20:55:01.136455383Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 02 20:55:01 no-preload-336331 crio[770]: time="2025-12-02T20:55:01.137323273Z" level=info msg="Ran pod sandbox 1a6a3378ac09b3435aed1230e064d66b3b67e8767cf7254bddf3764ea4f161aa with infra container: default/busybox/POD" id=f8977f82-b9f1-4e4e-b95c-8d58a3a0f86b name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 02 20:55:01 no-preload-336331 crio[770]: time="2025-12-02T20:55:01.138800227Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=72e0a1e0-4b34-499d-b8b3-18adc6cf1c2b name=/runtime.v1.ImageService/ImageStatus
	Dec 02 20:55:01 no-preload-336331 crio[770]: time="2025-12-02T20:55:01.138919528Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=72e0a1e0-4b34-499d-b8b3-18adc6cf1c2b name=/runtime.v1.ImageService/ImageStatus
	Dec 02 20:55:01 no-preload-336331 crio[770]: time="2025-12-02T20:55:01.138953548Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=72e0a1e0-4b34-499d-b8b3-18adc6cf1c2b name=/runtime.v1.ImageService/ImageStatus
	Dec 02 20:55:01 no-preload-336331 crio[770]: time="2025-12-02T20:55:01.139793815Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=1432c4ed-b945-4cce-9ae0-c985a90696d6 name=/runtime.v1.ImageService/PullImage
	Dec 02 20:55:01 no-preload-336331 crio[770]: time="2025-12-02T20:55:01.141503974Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 02 20:55:03 no-preload-336331 crio[770]: time="2025-12-02T20:55:03.125308936Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=1432c4ed-b945-4cce-9ae0-c985a90696d6 name=/runtime.v1.ImageService/PullImage
	Dec 02 20:55:03 no-preload-336331 crio[770]: time="2025-12-02T20:55:03.125979828Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=0bd62338-f0dc-485e-8311-6179b63f1b11 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 20:55:03 no-preload-336331 crio[770]: time="2025-12-02T20:55:03.127696419Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=bc1bd005-787d-4d8e-84c4-0176e50a44c6 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 20:55:03 no-preload-336331 crio[770]: time="2025-12-02T20:55:03.130907043Z" level=info msg="Creating container: default/busybox/busybox" id=0a1e6230-b0ea-4e69-bc44-1acb76a03209 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 20:55:03 no-preload-336331 crio[770]: time="2025-12-02T20:55:03.131041908Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 20:55:03 no-preload-336331 crio[770]: time="2025-12-02T20:55:03.134709032Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 20:55:03 no-preload-336331 crio[770]: time="2025-12-02T20:55:03.135161234Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 20:55:03 no-preload-336331 crio[770]: time="2025-12-02T20:55:03.197944956Z" level=info msg="Created container bf9938bcf43868ce816016ebeafa869f399a598ce27e4871629e884782a4733c: default/busybox/busybox" id=0a1e6230-b0ea-4e69-bc44-1acb76a03209 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 20:55:03 no-preload-336331 crio[770]: time="2025-12-02T20:55:03.199054269Z" level=info msg="Starting container: bf9938bcf43868ce816016ebeafa869f399a598ce27e4871629e884782a4733c" id=3ce089da-b1d7-4c53-9341-939d18485fe4 name=/runtime.v1.RuntimeService/StartContainer
	Dec 02 20:55:03 no-preload-336331 crio[770]: time="2025-12-02T20:55:03.202155278Z" level=info msg="Started container" PID=2876 containerID=bf9938bcf43868ce816016ebeafa869f399a598ce27e4871629e884782a4733c description=default/busybox/busybox id=3ce089da-b1d7-4c53-9341-939d18485fe4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1a6a3378ac09b3435aed1230e064d66b3b67e8767cf7254bddf3764ea4f161aa
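
Note: the CRI-O excerpt shows the full sandbox-to-container path for the busybox test pod: RunPodSandbox attaches it to the kindnet CNI network, the missing gcr.io/k8s-minikube/busybox:1.28.4-glibc image is pulled by digest, then CreateContainer/StartContainer run it in roughly two seconds. The same lifecycle can be walked manually with crictl on the node (a sketch, assuming crictl is on PATH):

    sudo crictl pull gcr.io/k8s-minikube/busybox:1.28.4-glibc
    sudo crictl pods --name busybox
    sudo crictl inspecti gcr.io/k8s-minikube/busybox:1.28.4-glibc
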
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	bf9938bcf4386       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   1a6a3378ac09b       busybox                                     default
	3defb36abff31       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                      13 seconds ago      Running             coredns                   0                   7cd133ec8b6a1       coredns-7d764666f9-ghxk6                    kube-system
	f883a129a80eb       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 seconds ago      Running             storage-provisioner       0                   918000f0492e0       storage-provisioner                         kube-system
	9cc260844b755       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    24 seconds ago      Running             kindnet-cni               0                   dc4f8c15db4ac       kindnet-5blk7                               kube-system
	a849e942f1a2b       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810                                      26 seconds ago      Running             kube-proxy                0                   d2f97e4f07672       kube-proxy-qc2v9                            kube-system
	4feee542adce6       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                      37 seconds ago      Running             etcd                      0                   ee21bd6b6eba0       etcd-no-preload-336331                      kube-system
	5e7186897159f       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b                                      37 seconds ago      Running             kube-apiserver            0                   0be3e2c0beca6       kube-apiserver-no-preload-336331            kube-system
	e6f2c59119c96       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc                                      37 seconds ago      Running             kube-controller-manager   0                   fe874bd3b0b2d       kube-controller-manager-no-preload-336331   kube-system
	62564c2fefa51       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46                                      37 seconds ago      Running             kube-scheduler            0                   357c530ca36e3       kube-scheduler-no-preload-336331            kube-system
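
Note: the table above is the runtime's own container listing (what crictl ps prints): the busybox test pod, coredns, storage-provisioner, kindnet, kube-proxy and the four static control-plane containers are all Running on attempt 0. To regenerate or drill into it on the node (a sketch):

    sudo crictl ps -a
    sudo crictl ps --name busybox -o json    # detailed state for a single container
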
	
	
	==> coredns [3defb36abff31ed5a67200ef9dfdd959a5b6902af2f85b034beda8b33f6132ff] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:44584 - 3572 "HINFO IN 5958804683565611146.6391358302492921837. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.024234016s
	
	
	==> describe nodes <==
	Name:               no-preload-336331
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-336331
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c0dec3f81fa1d67ed4ee425e0873d0aa009f9b92
	                    minikube.k8s.io/name=no-preload-336331
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_02T20_54_39_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 02 Dec 2025 20:54:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-336331
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 02 Dec 2025 20:55:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 02 Dec 2025 20:55:09 +0000   Tue, 02 Dec 2025 20:54:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 02 Dec 2025 20:55:09 +0000   Tue, 02 Dec 2025 20:54:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 02 Dec 2025 20:55:09 +0000   Tue, 02 Dec 2025 20:54:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 02 Dec 2025 20:55:09 +0000   Tue, 02 Dec 2025 20:54:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-336331
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 c31a325af81b969158c21fa769271857
	  System UUID:                3a1272e4-255b-4719-83a7-b5faa7d71457
	  Boot ID:                    9dd5456f-e394-4b4b-9458-48d26faf507e
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-7d764666f9-ghxk6                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     27s
	  kube-system                 etcd-no-preload-336331                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         33s
	  kube-system                 kindnet-5blk7                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-no-preload-336331             250m (3%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-controller-manager-no-preload-336331    200m (2%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-proxy-qc2v9                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-no-preload-336331             100m (1%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  28s   node-controller  Node no-preload-336331 event: Registered Node no-preload-336331 in Controller
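
Note: the node dump above matches kubectl describe node for the no-preload profile: kubelet v1.35.0-beta.0 on CRI-O 1.34.2, Ready since 20:54:57, with requested CPU and memory well below capacity. To reproduce it (a sketch, assuming kubectl targets this cluster):

    kubectl describe node no-preload-336331
    kubectl get node no-preload-336331 -o wide
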
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: a6 a8 6c 2c e4 fb 66 95 0d 9f b9 e6 08 00
	[ +32.254238] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a6 a8 6c 2c e4 fb 66 95 0d 9f b9 e6 08 00
	[Dec 2 20:52] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 62 03 bd 14 45 8a 08 06
	[  +0.000590] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 96 27 ad 0d 40 04 08 06
	[Dec 2 20:53] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 56 d8 4f 6f 7c f7 08 06
	[  +0.000700] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 82 e4 ba c0 78 5f 08 06
	[ +10.119645] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000022] ll header: 00000000: ff ff ff ff ff ff ea 15 d8 88 c9 57 08 06
	[  +2.447166] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 62 df 09 53 d6 6e 08 06
	[  +0.000374] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 62 8d 06 71 0a 5e 08 06
	[Dec 2 20:54] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 12 47 13 50 f6 bc 08 06
	[  +0.001523] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ea 15 d8 88 c9 57 08 06
	[ +22.123549] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 36 0d 45 06 42 2a 08 06
	[  +0.000389] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 56 d8 4f 6f 7c f7 08 06
	
	
	==> etcd [4feee542adce6575e5560d25c64cc9e8c4725726e6418e19aee09a38fb608497] <==
	{"level":"info","ts":"2025-12-02T20:54:35.917375Z","caller":"traceutil/trace.go:172","msg":"trace[1739014816] transaction","detail":"{read_only:false; response_revision:29; number_of_response:1; }","duration":"157.709681ms","start":"2025-12-02T20:54:35.759654Z","end":"2025-12-02T20:54:35.917364Z","steps":["trace[1739014816] 'process raft request'  (duration: 157.498518ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-02T20:54:35.917387Z","caller":"traceutil/trace.go:172","msg":"trace[1923564702] transaction","detail":"{read_only:false; response_revision:25; number_of_response:1; }","duration":"158.432669ms","start":"2025-12-02T20:54:35.758936Z","end":"2025-12-02T20:54:35.917368Z","steps":["trace[1923564702] 'process raft request'  (duration: 158.006208ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-02T20:54:35.917453Z","caller":"traceutil/trace.go:172","msg":"trace[464572454] transaction","detail":"{read_only:false; response_revision:30; number_of_response:1; }","duration":"157.395761ms","start":"2025-12-02T20:54:35.760048Z","end":"2025-12-02T20:54:35.917443Z","steps":["trace[464572454] 'process raft request'  (duration: 157.126528ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-02T20:54:35.917603Z","caller":"traceutil/trace.go:172","msg":"trace[1607672218] transaction","detail":"{read_only:false; response_revision:26; number_of_response:1; }","duration":"158.309302ms","start":"2025-12-02T20:54:35.759278Z","end":"2025-12-02T20:54:35.917587Z","steps":["trace[1607672218] 'process raft request'  (duration: 157.701269ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-02T20:54:35.919169Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"146.470163ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/certificatesigningrequests\" limit:1 ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-12-02T20:54:35.920642Z","caller":"traceutil/trace.go:172","msg":"trace[962229196] range","detail":"{range_begin:/registry/certificatesigningrequests; range_end:; response_count:0; response_revision:31; }","duration":"147.951131ms","start":"2025-12-02T20:54:35.772674Z","end":"2025-12-02T20:54:35.920626Z","steps":["trace[962229196] 'agreement among raft nodes before linearized reading'  (duration: 146.434995ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-02T20:54:35.919509Z","caller":"traceutil/trace.go:172","msg":"trace[676201237] transaction","detail":"{read_only:false; response_revision:32; number_of_response:1; }","duration":"148.222342ms","start":"2025-12-02T20:54:35.771262Z","end":"2025-12-02T20:54:35.919484Z","steps":["trace[676201237] 'process raft request'  (duration: 148.145194ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-02T20:54:36.222553Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"160.159234ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15638357046585645527 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/apiregistration.k8s.io/apiservices/v1.storage.k8s.io\" mod_revision:0 > success:<request_put:<key:\"/registry/apiregistration.k8s.io/apiservices/v1.storage.k8s.io\" value_size:880 >> failure:<>>","response":"size:14"}
	{"level":"info","ts":"2025-12-02T20:54:36.222820Z","caller":"traceutil/trace.go:172","msg":"trace[622145255] transaction","detail":"{read_only:false; response_revision:41; number_of_response:1; }","duration":"283.135242ms","start":"2025-12-02T20:54:35.939671Z","end":"2025-12-02T20:54:36.222806Z","steps":["trace[622145255] 'process raft request'  (duration: 283.066179ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-02T20:54:36.223049Z","caller":"traceutil/trace.go:172","msg":"trace[1209290900] transaction","detail":"{read_only:false; response_revision:40; number_of_response:1; }","duration":"298.455003ms","start":"2025-12-02T20:54:35.924579Z","end":"2025-12-02T20:54:36.223034Z","steps":["trace[1209290900] 'process raft request'  (duration: 137.740317ms)","trace[1209290900] 'compare'  (duration: 160.035875ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-02T20:54:36.354028Z","caller":"traceutil/trace.go:172","msg":"trace[537850306] linearizableReadLoop","detail":"{readStateIndex:45; appliedIndex:45; }","duration":"107.631557ms","start":"2025-12-02T20:54:36.246358Z","end":"2025-12-02T20:54:36.353990Z","steps":["trace[537850306] 'read index received'  (duration: 107.622022ms)","trace[537850306] 'applied index is now lower than readState.Index'  (duration: 8.132µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-02T20:54:36.374750Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"128.375781ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/priorityclasses/system-node-critical\" limit:1 ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-12-02T20:54:36.374813Z","caller":"traceutil/trace.go:172","msg":"trace[1246363322] range","detail":"{range_begin:/registry/priorityclasses/system-node-critical; range_end:; response_count:0; response_revision:41; }","duration":"128.455394ms","start":"2025-12-02T20:54:36.246346Z","end":"2025-12-02T20:54:36.374801Z","steps":["trace[1246363322] 'agreement among raft nodes before linearized reading'  (duration: 107.757203ms)","trace[1246363322] 'range keys from in-memory index tree'  (duration: 20.580873ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-02T20:54:36.374939Z","caller":"traceutil/trace.go:172","msg":"trace[1976263515] transaction","detail":"{read_only:false; response_revision:42; number_of_response:1; }","duration":"148.319047ms","start":"2025-12-02T20:54:36.226599Z","end":"2025-12-02T20:54:36.374918Z","steps":["trace[1976263515] 'process raft request'  (duration: 127.434813ms)","trace[1976263515] 'compare'  (duration: 20.680997ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-02T20:54:36.374899Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"128.097563ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles\" limit:1 ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-12-02T20:54:36.375086Z","caller":"traceutil/trace.go:172","msg":"trace[143575981] range","detail":"{range_begin:/registry/clusterroles; range_end:; response_count:0; response_revision:42; }","duration":"128.294593ms","start":"2025-12-02T20:54:36.246784Z","end":"2025-12-02T20:54:36.375079Z","steps":["trace[143575981] 'agreement among raft nodes before linearized reading'  (duration: 128.058203ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-02T20:54:36.499910Z","caller":"traceutil/trace.go:172","msg":"trace[286311822] linearizableReadLoop","detail":"{readStateIndex:46; appliedIndex:46; }","duration":"119.910755ms","start":"2025-12-02T20:54:36.379974Z","end":"2025-12-02T20:54:36.499885Z","steps":["trace[286311822] 'read index received'  (duration: 119.902417ms)","trace[286311822] 'applied index is now lower than readState.Index'  (duration: 6.997µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-02T20:54:36.511570Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"131.577674ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:aggregate-to-admin\" limit:1 ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-12-02T20:54:36.511628Z","caller":"traceutil/trace.go:172","msg":"trace[702541432] range","detail":"{range_begin:/registry/clusterroles/system:aggregate-to-admin; range_end:; response_count:0; response_revision:42; }","duration":"131.653197ms","start":"2025-12-02T20:54:36.379964Z","end":"2025-12-02T20:54:36.511617Z","steps":["trace[702541432] 'agreement among raft nodes before linearized reading'  (duration: 120.020233ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-02T20:54:36.511799Z","caller":"traceutil/trace.go:172","msg":"trace[1489571477] transaction","detail":"{read_only:false; response_revision:44; number_of_response:1; }","duration":"133.616413ms","start":"2025-12-02T20:54:36.378168Z","end":"2025-12-02T20:54:36.511784Z","steps":["trace[1489571477] 'process raft request'  (duration: 133.568281ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-02T20:54:36.511883Z","caller":"traceutil/trace.go:172","msg":"trace[1629496006] transaction","detail":"{read_only:false; response_revision:43; number_of_response:1; }","duration":"134.080155ms","start":"2025-12-02T20:54:36.377766Z","end":"2025-12-02T20:54:36.511846Z","steps":["trace[1629496006] 'process raft request'  (duration: 122.246787ms)","trace[1629496006] 'compare'  (duration: 11.53176ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-02T20:54:36.914619Z","caller":"traceutil/trace.go:172","msg":"trace[922894813] transaction","detail":"{read_only:false; response_revision:66; number_of_response:1; }","duration":"128.27476ms","start":"2025-12-02T20:54:36.786328Z","end":"2025-12-02T20:54:36.914603Z","steps":["trace[922894813] 'process raft request'  (duration: 123.599878ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-02T20:54:36.916335Z","caller":"traceutil/trace.go:172","msg":"trace[491314140] transaction","detail":"{read_only:false; response_revision:68; number_of_response:1; }","duration":"128.524082ms","start":"2025-12-02T20:54:36.787783Z","end":"2025-12-02T20:54:36.916307Z","steps":["trace[491314140] 'process raft request'  (duration: 128.475544ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-02T20:54:36.916461Z","caller":"traceutil/trace.go:172","msg":"trace[805312645] transaction","detail":"{read_only:false; response_revision:67; number_of_response:1; }","duration":"130.044493ms","start":"2025-12-02T20:54:36.786391Z","end":"2025-12-02T20:54:36.916435Z","steps":["trace[805312645] 'process raft request'  (duration: 129.77498ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-02T20:54:57.764197Z","caller":"traceutil/trace.go:172","msg":"trace[1383284969] transaction","detail":"{read_only:false; response_revision:401; number_of_response:1; }","duration":"153.809145ms","start":"2025-12-02T20:54:57.610362Z","end":"2025-12-02T20:54:57.764172Z","steps":["trace[1383284969] 'process raft request'  (duration: 153.550402ms)"],"step_count":1}
	
	
	==> kernel <==
	 20:55:11 up  2:37,  0 user,  load average: 5.71, 4.01, 2.56
	Linux no-preload-336331 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [9cc260844b7552d9e4617f9af0a74fb7578784f17421c0d2aad27f7b9e62e1b9] <==
	I1202 20:54:47.321565       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1202 20:54:47.321827       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1202 20:54:47.321961       1 main.go:148] setting mtu 1500 for CNI 
	I1202 20:54:47.321978       1 main.go:178] kindnetd IP family: "ipv4"
	I1202 20:54:47.321998       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-02T20:54:47Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1202 20:54:47.619910       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1202 20:54:47.619986       1 controller.go:381] "Waiting for informer caches to sync"
	I1202 20:54:47.624318       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1202 20:54:47.624464       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1202 20:54:47.925105       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1202 20:54:47.925133       1 metrics.go:72] Registering metrics
	I1202 20:54:47.925228       1 controller.go:711] "Syncing nftables rules"
	I1202 20:54:57.621235       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1202 20:54:57.621287       1 main.go:301] handling current node
	I1202 20:55:07.620706       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1202 20:55:07.620744       1 main.go:301] handling current node
	
	
	==> kube-apiserver [5e7186897159f4104134e6669726e0cfd2e2a59d44ee2bf5c12f78a46638b3d4] <==
	I1202 20:54:35.447528       1 controller.go:667] quota admission added evaluator for: namespaces
	E1202 20:54:35.468361       1 controller.go:201] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	E1202 20:54:35.469477       1 controller.go:156] "Error while syncing ConfigMap" err="namespaces \"kube-system\" not found" logger="UnhandledError" configmap="kube-system/kube-apiserver-legacy-service-account-token-tracking"
	I1202 20:54:35.495549       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I1202 20:54:35.495629       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1202 20:54:35.750790       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1202 20:54:35.759770       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1202 20:54:36.512811       1 storage_scheduling.go:123] created PriorityClass system-node-critical with value 2000001000
	I1202 20:54:36.580395       1 storage_scheduling.go:123] created PriorityClass system-cluster-critical with value 2000000000
	I1202 20:54:36.580422       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1202 20:54:37.679308       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1202 20:54:37.753177       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1202 20:54:37.856417       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1202 20:54:37.866275       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1202 20:54:37.867954       1 controller.go:667] quota admission added evaluator for: endpoints
	I1202 20:54:37.873782       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1202 20:54:38.292480       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1202 20:54:38.922992       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1202 20:54:38.935377       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1202 20:54:38.943853       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1202 20:54:43.746038       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1202 20:54:43.751114       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1202 20:54:43.948418       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1202 20:54:44.297411       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1202 20:55:09.907355       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8443->192.168.76.1:50630: use of closed network connection
	
	
	==> kube-controller-manager [e6f2c59119c965de76267e66c98ee9bf9d9eeb5c3083d405c4fc9c5fae9f2e7a] <==
	I1202 20:54:43.111342       1 shared_informer.go:377] "Caches are synced"
	I1202 20:54:43.111690       1 shared_informer.go:377] "Caches are synced"
	I1202 20:54:43.111842       1 shared_informer.go:377] "Caches are synced"
	I1202 20:54:43.111989       1 shared_informer.go:377] "Caches are synced"
	I1202 20:54:43.112133       1 shared_informer.go:377] "Caches are synced"
	I1202 20:54:43.112291       1 shared_informer.go:377] "Caches are synced"
	I1202 20:54:43.112571       1 shared_informer.go:377] "Caches are synced"
	I1202 20:54:43.113139       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="no-preload-336331"
	I1202 20:54:43.113240       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I1202 20:54:43.115104       1 shared_informer.go:377] "Caches are synced"
	I1202 20:54:43.115601       1 shared_informer.go:377] "Caches are synced"
	I1202 20:54:43.117520       1 shared_informer.go:377] "Caches are synced"
	I1202 20:54:43.117686       1 shared_informer.go:377] "Caches are synced"
	I1202 20:54:43.117306       1 shared_informer.go:377] "Caches are synced"
	I1202 20:54:43.117368       1 shared_informer.go:377] "Caches are synced"
	I1202 20:54:43.117186       1 shared_informer.go:377] "Caches are synced"
	I1202 20:54:43.121640       1 shared_informer.go:377] "Caches are synced"
	I1202 20:54:43.121730       1 shared_informer.go:377] "Caches are synced"
	I1202 20:54:43.122877       1 shared_informer.go:377] "Caches are synced"
	I1202 20:54:43.124931       1 range_allocator.go:433] "Set node PodCIDR" node="no-preload-336331" podCIDRs=["10.244.0.0/24"]
	I1202 20:54:43.206744       1 shared_informer.go:377] "Caches are synced"
	I1202 20:54:43.206781       1 shared_informer.go:377] "Caches are synced"
	I1202 20:54:43.206799       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1202 20:54:43.206804       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1202 20:54:58.115776       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [a849e942f1a2b31fcd40701bbac79f5210615b3adb3cc1ba1da9f65499b104af] <==
	I1202 20:54:44.768158       1 server_linux.go:53] "Using iptables proxy"
	I1202 20:54:44.825792       1 shared_informer.go:370] "Waiting for caches to sync"
	I1202 20:54:44.926831       1 shared_informer.go:377] "Caches are synced"
	I1202 20:54:44.926882       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1202 20:54:44.927013       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1202 20:54:44.948381       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1202 20:54:44.948459       1 server_linux.go:136] "Using iptables Proxier"
	I1202 20:54:44.954765       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1202 20:54:44.955164       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1202 20:54:44.955190       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 20:54:44.956550       1 config.go:106] "Starting endpoint slice config controller"
	I1202 20:54:44.956595       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1202 20:54:44.956600       1 config.go:403] "Starting serviceCIDR config controller"
	I1202 20:54:44.956600       1 config.go:200] "Starting service config controller"
	I1202 20:54:44.956613       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1202 20:54:44.956618       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1202 20:54:44.956636       1 config.go:309] "Starting node config controller"
	I1202 20:54:44.956641       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1202 20:54:45.057783       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1202 20:54:45.057815       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1202 20:54:45.057839       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1202 20:54:45.057776       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [62564c2fefa5138b5df9a66e8dbd765178c580241250076e0b9637ec3d281598] <==
	E1202 20:54:36.472613       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope"
	E1202 20:54:36.474053       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1202 20:54:36.486664       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope"
	E1202 20:54:36.487771       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1202 20:54:36.536686       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="services is forbidden: User \"system:kube-scheduler\" cannot watch resource \"services\" in API group \"\" at the cluster scope"
	E1202 20:54:36.537750       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1202 20:54:36.568731       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope"
	E1202 20:54:36.570062       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1202 20:54:36.593477       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot watch resource \"statefulsets\" in API group \"apps\" at the cluster scope"
	E1202 20:54:36.594595       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1202 20:54:36.604232       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\""
	E1202 20:54:36.605353       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	E1202 20:54:36.649183       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope"
	E1202 20:54:36.650337       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1202 20:54:36.731268       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumes\" in API group \"\" at the cluster scope"
	E1202 20:54:36.732906       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1202 20:54:36.832926       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicationcontrollers\" in API group \"\" at the cluster scope"
	E1202 20:54:36.834266       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1202 20:54:36.850489       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope"
	E1202 20:54:36.851579       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1202 20:54:36.900830       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="namespaces is forbidden: User \"system:kube-scheduler\" cannot watch resource \"namespaces\" in API group \"\" at the cluster scope"
	E1202 20:54:36.901984       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1202 20:54:36.909648       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicasets\" in API group \"apps\" at the cluster scope"
	E1202 20:54:36.911024       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	I1202 20:54:39.614254       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 02 20:54:44 no-preload-336331 kubelet[2212]: I1202 20:54:44.405416    2212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/91426b3b-e557-4959-91b3-cb5e256351ac-lib-modules\") pod \"kube-proxy-qc2v9\" (UID: \"91426b3b-e557-4959-91b3-cb5e256351ac\") " pod="kube-system/kube-proxy-qc2v9"
	Dec 02 20:54:44 no-preload-336331 kubelet[2212]: I1202 20:54:44.405448    2212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/8fc0ac0d-1c00-4916-99fd-a1b4e6eec75e-cni-cfg\") pod \"kindnet-5blk7\" (UID: \"8fc0ac0d-1c00-4916-99fd-a1b4e6eec75e\") " pod="kube-system/kindnet-5blk7"
	Dec 02 20:54:44 no-preload-336331 kubelet[2212]: I1202 20:54:44.405470    2212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8fc0ac0d-1c00-4916-99fd-a1b4e6eec75e-xtables-lock\") pod \"kindnet-5blk7\" (UID: \"8fc0ac0d-1c00-4916-99fd-a1b4e6eec75e\") " pod="kube-system/kindnet-5blk7"
	Dec 02 20:54:44 no-preload-336331 kubelet[2212]: I1202 20:54:44.405544    2212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8fc0ac0d-1c00-4916-99fd-a1b4e6eec75e-lib-modules\") pod \"kindnet-5blk7\" (UID: \"8fc0ac0d-1c00-4916-99fd-a1b4e6eec75e\") " pod="kube-system/kindnet-5blk7"
	Dec 02 20:54:44 no-preload-336331 kubelet[2212]: I1202 20:54:44.405570    2212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttgjx\" (UniqueName: \"kubernetes.io/projected/91426b3b-e557-4959-91b3-cb5e256351ac-kube-api-access-ttgjx\") pod \"kube-proxy-qc2v9\" (UID: \"91426b3b-e557-4959-91b3-cb5e256351ac\") " pod="kube-system/kube-proxy-qc2v9"
	Dec 02 20:54:44 no-preload-336331 kubelet[2212]: I1202 20:54:44.405733    2212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/91426b3b-e557-4959-91b3-cb5e256351ac-xtables-lock\") pod \"kube-proxy-qc2v9\" (UID: \"91426b3b-e557-4959-91b3-cb5e256351ac\") " pod="kube-system/kube-proxy-qc2v9"
	Dec 02 20:54:44 no-preload-336331 kubelet[2212]: I1202 20:54:44.405780    2212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/91426b3b-e557-4959-91b3-cb5e256351ac-kube-proxy\") pod \"kube-proxy-qc2v9\" (UID: \"91426b3b-e557-4959-91b3-cb5e256351ac\") " pod="kube-system/kube-proxy-qc2v9"
	Dec 02 20:54:44 no-preload-336331 kubelet[2212]: I1202 20:54:44.838829    2212 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-qc2v9" podStartSLOduration=0.838808493 podStartE2EDuration="838.808493ms" podCreationTimestamp="2025-12-02 20:54:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:54:44.838555608 +0000 UTC m=+6.148869677" watchObservedRunningTime="2025-12-02 20:54:44.838808493 +0000 UTC m=+6.149122560"
	Dec 02 20:54:47 no-preload-336331 kubelet[2212]: E1202 20:54:47.592428    2212 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-no-preload-336331" containerName="kube-controller-manager"
	Dec 02 20:54:50 no-preload-336331 kubelet[2212]: E1202 20:54:50.496281    2212 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-no-preload-336331" containerName="kube-apiserver"
	Dec 02 20:54:50 no-preload-336331 kubelet[2212]: I1202 20:54:50.509228    2212 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kindnet-5blk7" podStartSLOduration=4.030185743 podStartE2EDuration="6.509202792s" podCreationTimestamp="2025-12-02 20:54:44 +0000 UTC" firstStartedPulling="2025-12-02 20:54:44.649525232 +0000 UTC m=+5.959839295" lastFinishedPulling="2025-12-02 20:54:47.128542296 +0000 UTC m=+8.438856344" observedRunningTime="2025-12-02 20:54:47.849430609 +0000 UTC m=+9.159744676" watchObservedRunningTime="2025-12-02 20:54:50.509202792 +0000 UTC m=+11.819516862"
	Dec 02 20:54:51 no-preload-336331 kubelet[2212]: E1202 20:54:51.440047    2212 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-no-preload-336331" containerName="kube-scheduler"
	Dec 02 20:54:52 no-preload-336331 kubelet[2212]: E1202 20:54:52.387823    2212 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-no-preload-336331" containerName="etcd"
	Dec 02 20:54:57 no-preload-336331 kubelet[2212]: E1202 20:54:57.599199    2212 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-no-preload-336331" containerName="kube-controller-manager"
	Dec 02 20:54:57 no-preload-336331 kubelet[2212]: I1202 20:54:57.811559    2212 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
	Dec 02 20:54:58 no-preload-336331 kubelet[2212]: I1202 20:54:58.000324    2212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/e3c38dcd-7f1f-4382-bf82-b09cde780bdb-tmp\") pod \"storage-provisioner\" (UID: \"e3c38dcd-7f1f-4382-bf82-b09cde780bdb\") " pod="kube-system/storage-provisioner"
	Dec 02 20:54:58 no-preload-336331 kubelet[2212]: I1202 20:54:58.000416    2212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtrhf\" (UniqueName: \"kubernetes.io/projected/e3c38dcd-7f1f-4382-bf82-b09cde780bdb-kube-api-access-vtrhf\") pod \"storage-provisioner\" (UID: \"e3c38dcd-7f1f-4382-bf82-b09cde780bdb\") " pod="kube-system/storage-provisioner"
	Dec 02 20:54:58 no-preload-336331 kubelet[2212]: I1202 20:54:58.000666    2212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1696ea67-a1db-437c-bada-07c12d4e9fc8-config-volume\") pod \"coredns-7d764666f9-ghxk6\" (UID: \"1696ea67-a1db-437c-bada-07c12d4e9fc8\") " pod="kube-system/coredns-7d764666f9-ghxk6"
	Dec 02 20:54:58 no-preload-336331 kubelet[2212]: I1202 20:54:58.001011    2212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gkgg\" (UniqueName: \"kubernetes.io/projected/1696ea67-a1db-437c-bada-07c12d4e9fc8-kube-api-access-9gkgg\") pod \"coredns-7d764666f9-ghxk6\" (UID: \"1696ea67-a1db-437c-bada-07c12d4e9fc8\") " pod="kube-system/coredns-7d764666f9-ghxk6"
	Dec 02 20:54:58 no-preload-336331 kubelet[2212]: E1202 20:54:58.867498    2212 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-ghxk6" containerName="coredns"
	Dec 02 20:54:58 no-preload-336331 kubelet[2212]: I1202 20:54:58.879001    2212 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.878980899 podStartE2EDuration="14.878980899s" podCreationTimestamp="2025-12-02 20:54:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:54:58.878789113 +0000 UTC m=+20.189103180" watchObservedRunningTime="2025-12-02 20:54:58.878980899 +0000 UTC m=+20.189294967"
	Dec 02 20:54:58 no-preload-336331 kubelet[2212]: I1202 20:54:58.895751    2212 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-ghxk6" podStartSLOduration=14.895703475 podStartE2EDuration="14.895703475s" podCreationTimestamp="2025-12-02 20:54:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:54:58.894893125 +0000 UTC m=+20.205207194" watchObservedRunningTime="2025-12-02 20:54:58.895703475 +0000 UTC m=+20.206017544"
	Dec 02 20:54:59 no-preload-336331 kubelet[2212]: E1202 20:54:59.870134    2212 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-ghxk6" containerName="coredns"
	Dec 02 20:55:00 no-preload-336331 kubelet[2212]: E1202 20:55:00.873282    2212 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-ghxk6" containerName="coredns"
	Dec 02 20:55:00 no-preload-336331 kubelet[2212]: I1202 20:55:00.922413    2212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hc4jd\" (UniqueName: \"kubernetes.io/projected/17098746-a5de-4eb1-afef-faf394ddb509-kube-api-access-hc4jd\") pod \"busybox\" (UID: \"17098746-a5de-4eb1-afef-faf394ddb509\") " pod="default/busybox"
	
	
	==> storage-provisioner [f883a129a80eb20b6d2ad039b10a677df2069491d019e2a211618e4c80eb6390] <==
	I1202 20:54:58.232421       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1202 20:54:58.245888       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1202 20:54:58.245977       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1202 20:54:58.252986       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:54:58.260038       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1202 20:54:58.260272       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1202 20:54:58.260480       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-336331_97d4ef2a-47c8-4cc3-8d87-0173446e50ef!
	I1202 20:54:58.260516       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3ea12a83-8249-476a-aff4-76a34b961543", APIVersion:"v1", ResourceVersion:"418", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-336331_97d4ef2a-47c8-4cc3-8d87-0173446e50ef became leader
	W1202 20:54:58.263771       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:54:58.269576       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1202 20:54:58.361446       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-336331_97d4ef2a-47c8-4cc3-8d87-0173446e50ef!
	W1202 20:55:00.275310       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:55:00.282288       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:55:02.286253       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:55:02.293619       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:55:04.297150       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:55:04.301544       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:55:06.307858       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:55:06.312456       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:55:08.316562       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:55:08.321448       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:55:10.325108       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:55:10.330031       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-336331 -n no-preload-336331
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-336331 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.42s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.57s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-245604 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-245604 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (343.650767ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T20:55:23Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-245604 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
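The MK_ADDON_ENABLE_PAUSED failure above is raised by minikube's "check paused" pre-flight step, which (per the stderr capture) shells into the node and runs `sudo runc list -f json`; on this crio-based node that command exits 1 because /run/runc does not exist. A minimal sketch of checking the same state by hand is below; it assumes the newest-cni-245604 profile is still running and reachable via `minikube ssh`, and the crictl variant is only a suggested crio-side alternative, not what the test harness itself executes:

	# Reproduce the failing "check paused" command manually (assumption: node still up):
	out/minikube-linux-amd64 -p newest-cni-245604 ssh -- sudo runc list -f json

	# With the crio runtime, container state can also be listed without
	# depending on /run/runc, e.g. via crictl:
	out/minikube-linux-amd64 -p newest-cni-245604 ssh -- sudo crictl ps -a -o json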
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-245604
helpers_test.go:243: (dbg) docker inspect newest-cni-245604:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ae60842ee29fe46d926c1b83a674aa62b502c7f9b5da358f869a46142c7f9e7c",
	        "Created": "2025-12-02T20:54:52.492393664Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 745062,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-02T20:54:52.539145254Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/ae60842ee29fe46d926c1b83a674aa62b502c7f9b5da358f869a46142c7f9e7c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ae60842ee29fe46d926c1b83a674aa62b502c7f9b5da358f869a46142c7f9e7c/hostname",
	        "HostsPath": "/var/lib/docker/containers/ae60842ee29fe46d926c1b83a674aa62b502c7f9b5da358f869a46142c7f9e7c/hosts",
	        "LogPath": "/var/lib/docker/containers/ae60842ee29fe46d926c1b83a674aa62b502c7f9b5da358f869a46142c7f9e7c/ae60842ee29fe46d926c1b83a674aa62b502c7f9b5da358f869a46142c7f9e7c-json.log",
	        "Name": "/newest-cni-245604",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-245604:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-245604",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ae60842ee29fe46d926c1b83a674aa62b502c7f9b5da358f869a46142c7f9e7c",
	                "LowerDir": "/var/lib/docker/overlay2/cadb92bade23480fadfbab75eef8dd705d24c3d8c95f9fa3a23707e903f6c6b9-init/diff:/var/lib/docker/overlay2/49bb44e987885c48600d5ae7ad3c81cad82a00f38070c0460882c5746d9fae59/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cadb92bade23480fadfbab75eef8dd705d24c3d8c95f9fa3a23707e903f6c6b9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cadb92bade23480fadfbab75eef8dd705d24c3d8c95f9fa3a23707e903f6c6b9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cadb92bade23480fadfbab75eef8dd705d24c3d8c95f9fa3a23707e903f6c6b9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-245604",
	                "Source": "/var/lib/docker/volumes/newest-cni-245604/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-245604",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-245604",
	                "name.minikube.sigs.k8s.io": "newest-cni-245604",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "4b6201893975319053fb88024110b6c5f8fbf4b2741b659d72556a9fa9c010da",
	            "SandboxKey": "/var/run/docker/netns/4b6201893975",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33493"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33494"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33497"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33495"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33496"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-245604": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "417e9d972863c61faff7f9557de77252152a1c936456e0c9e3a58022e688fea1",
	                    "EndpointID": "28a35a052c724fefd2ecbfaaafa23c5ae253dd3a0a8c1c871bf1f06141bb4f07",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "9a:3d:e3:1a:9a:cd",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-245604",
	                        "ae60842ee29f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
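The full `docker inspect` dump above is captured verbatim by the test helpers; when only a single field is of interest, for example the host port mapped to the API server's 8443/tcp (shown as 33496 here), a narrower query along these lines would do (a sketch against the same container name, not something the harness runs):

	docker inspect newest-cni-245604 --format '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'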
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-245604 -n newest-cni-245604
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-245604 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p newest-cni-245604 logs -n 25: (1.191116061s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-775392 sudo systemctl status docker --all --full --no-pager                                                                                                                                                                                │ bridge-775392          │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │                     │
	│ ssh     │ -p bridge-775392 sudo systemctl cat docker --no-pager                                                                                                                                                                                                │ bridge-775392          │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │ 02 Dec 25 20:54 UTC │
	│ ssh     │ -p bridge-775392 sudo cat /etc/docker/daemon.json                                                                                                                                                                                                    │ bridge-775392          │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │                     │
	│ ssh     │ -p bridge-775392 sudo docker system info                                                                                                                                                                                                             │ bridge-775392          │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │                     │
	│ ssh     │ -p bridge-775392 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                                            │ bridge-775392          │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │                     │
	│ ssh     │ -p bridge-775392 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                            │ bridge-775392          │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │ 02 Dec 25 20:54 UTC │
	│ ssh     │ -p bridge-775392 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                       │ bridge-775392          │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │                     │
	│ ssh     │ -p bridge-775392 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                                 │ bridge-775392          │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │ 02 Dec 25 20:54 UTC │
	│ ssh     │ -p bridge-775392 sudo cri-dockerd --version                                                                                                                                                                                                          │ bridge-775392          │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │ 02 Dec 25 20:54 UTC │
	│ ssh     │ -p bridge-775392 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                            │ bridge-775392          │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │                     │
	│ ssh     │ -p bridge-775392 sudo systemctl cat containerd --no-pager                                                                                                                                                                                            │ bridge-775392          │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │ 02 Dec 25 20:54 UTC │
	│ ssh     │ -p bridge-775392 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                                     │ bridge-775392          │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │ 02 Dec 25 20:54 UTC │
	│ ssh     │ -p bridge-775392 sudo cat /etc/containerd/config.toml                                                                                                                                                                                                │ bridge-775392          │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │ 02 Dec 25 20:54 UTC │
	│ ssh     │ -p bridge-775392 sudo containerd config dump                                                                                                                                                                                                         │ bridge-775392          │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │ 02 Dec 25 20:54 UTC │
	│ ssh     │ -p bridge-775392 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                                  │ bridge-775392          │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │ 02 Dec 25 20:54 UTC │
	│ ssh     │ -p bridge-775392 sudo systemctl cat crio --no-pager                                                                                                                                                                                                  │ bridge-775392          │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │ 02 Dec 25 20:54 UTC │
	│ ssh     │ -p bridge-775392 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                        │ bridge-775392          │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │ 02 Dec 25 20:54 UTC │
	│ ssh     │ -p bridge-775392 sudo crio config                                                                                                                                                                                                                    │ bridge-775392          │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │ 02 Dec 25 20:54 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-992336 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ old-k8s-version-992336 │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │ 02 Dec 25 20:54 UTC │
	│ delete  │ -p bridge-775392                                                                                                                                                                                                                                     │ bridge-775392          │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │ 02 Dec 25 20:54 UTC │
	│ start   │ -p old-k8s-version-992336 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0        │ old-k8s-version-992336 │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │                     │
	│ start   │ -p newest-cni-245604 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-245604      │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │ 02 Dec 25 20:55 UTC │
	│ addons  │ enable metrics-server -p no-preload-336331 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ no-preload-336331      │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │                     │
	│ stop    │ -p no-preload-336331 --alsologtostderr -v=3                                                                                                                                                                                                          │ no-preload-336331      │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-245604 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-245604      │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 20:54:51
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 20:54:51.248686  744523 out.go:360] Setting OutFile to fd 1 ...
	I1202 20:54:51.248931  744523 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:54:51.248939  744523 out.go:374] Setting ErrFile to fd 2...
	I1202 20:54:51.248944  744523 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:54:51.249199  744523 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-407427/.minikube/bin
	I1202 20:54:51.249701  744523 out.go:368] Setting JSON to false
	I1202 20:54:51.250904  744523 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":9435,"bootTime":1764699456,"procs":364,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 20:54:51.250979  744523 start.go:143] virtualization: kvm guest
	I1202 20:54:51.252790  744523 out.go:179] * [newest-cni-245604] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1202 20:54:51.253899  744523 out.go:179]   - MINIKUBE_LOCATION=21997
	I1202 20:54:51.253977  744523 notify.go:221] Checking for updates...
	I1202 20:54:51.255724  744523 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 20:54:51.257813  744523 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-407427/kubeconfig
	I1202 20:54:51.259113  744523 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-407427/.minikube
	I1202 20:54:51.260359  744523 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1202 20:54:51.261736  744523 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 20:54:51.263851  744523 config.go:182] Loaded profile config "default-k8s-diff-port-997805": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 20:54:51.264036  744523 config.go:182] Loaded profile config "no-preload-336331": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 20:54:51.264195  744523 config.go:182] Loaded profile config "old-k8s-version-992336": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1202 20:54:51.264328  744523 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 20:54:51.291120  744523 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1202 20:54:51.291259  744523 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 20:54:51.351993  744523 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-02 20:54:51.3414757 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 20:54:51.352148  744523 docker.go:319] overlay module found
	I1202 20:54:51.354258  744523 out.go:179] * Using the docker driver based on user configuration
	I1202 20:54:51.355593  744523 start.go:309] selected driver: docker
	I1202 20:54:51.355614  744523 start.go:927] validating driver "docker" against <nil>
	I1202 20:54:51.355627  744523 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 20:54:51.356356  744523 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 20:54:51.426417  744523 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-02 20:54:51.413315172 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 20:54:51.426660  744523 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1202 20:54:51.426715  744523 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1202 20:54:51.427099  744523 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1202 20:54:51.430750  744523 out.go:179] * Using Docker driver with root privileges
	I1202 20:54:51.432181  744523 cni.go:84] Creating CNI manager for ""
	I1202 20:54:51.432273  744523 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 20:54:51.432289  744523 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1202 20:54:51.432396  744523 start.go:353] cluster config:
	{Name:newest-cni-245604 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-245604 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 20:54:51.433991  744523 out.go:179] * Starting "newest-cni-245604" primary control-plane node in "newest-cni-245604" cluster
	I1202 20:54:51.435712  744523 cache.go:134] Beginning downloading kic base image for docker with crio
	I1202 20:54:51.437418  744523 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1202 20:54:51.438923  744523 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1202 20:54:51.439029  744523 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 20:54:51.471094  744523 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1202 20:54:51.471120  744523 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1202 20:54:51.534888  744523 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	W1202 20:54:51.754467  744523 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1202 20:54:51.754662  744523 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/newest-cni-245604/config.json ...
	I1202 20:54:51.754711  744523 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/newest-cni-245604/config.json: {Name:mkdd178ed72e91eb36b68a6cb223fd44f9a5dcff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:54:51.754782  744523 cache.go:107] acquiring lock: {Name:mkf03491d08646dc0a2273e6c20a49756d4e1761 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 20:54:51.754824  744523 cache.go:107] acquiring lock: {Name:mk4453b54b86b3689d0543734fa82feede2f4f33 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 20:54:51.754826  744523 cache.go:107] acquiring lock: {Name:mk8c99492104b5abf1d260aa0432b08c059c9259 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 20:54:51.754883  744523 cache.go:107] acquiring lock: {Name:mk5eb5d2ea906db41607942a8f8093a266b381cd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 20:54:51.754913  744523 cache.go:107] acquiring lock: {Name:mkda13332b8e3f844bd42c29502a9c7671b1ad3b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 20:54:51.754935  744523 cache.go:243] Successfully downloaded all kic artifacts
	I1202 20:54:51.754899  744523 cache.go:107] acquiring lock: {Name:mk01b60fbf34196e8795139c06a53061b5bbef1c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 20:54:51.754947  744523 cache.go:115] /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 exists
	I1202 20:54:51.754967  744523 cache.go:115] /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1202 20:54:51.754900  744523 cache.go:115] /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1202 20:54:51.754974  744523 start.go:360] acquireMachinesLock for newest-cni-245604: {Name:mk8ec8505d24ccef2b962d884ea41e40436fd883 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 20:54:51.754980  744523 cache.go:115] /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1202 20:54:51.754981  744523 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1" took 69.251µs
	I1202 20:54:51.754990  744523 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 242.138µs
	I1202 20:54:51.754996  744523 cache.go:115] /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1202 20:54:51.755004  744523 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1202 20:54:51.755001  744523 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1202 20:54:51.754963  744523 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0" took 141.678µs
	I1202 20:54:51.755018  744523 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1202 20:54:51.754982  744523 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 158.842µs
	I1202 20:54:51.755022  744523 start.go:364] duration metric: took 35.783µs to acquireMachinesLock for "newest-cni-245604"
	I1202 20:54:51.754970  744523 cache.go:115] /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1202 20:54:51.755028  744523 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 147.229µs
	I1202 20:54:51.755038  744523 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1202 20:54:51.755036  744523 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 141.032µs
	I1202 20:54:51.755051  744523 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1202 20:54:51.754782  744523 cache.go:107] acquiring lock: {Name:mk911a7415c1db6121866a16aaa8d547d8fc27e6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 20:54:51.755025  744523 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1202 20:54:51.754791  744523 cache.go:107] acquiring lock: {Name:mk1ce3ec6c8a0a78faf5ccb0bb487dc5a506ffff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 20:54:51.755107  744523 cache.go:115] /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1202 20:54:51.755130  744523 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 351.859µs
	I1202 20:54:51.755151  744523 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1202 20:54:51.755051  744523 start.go:93] Provisioning new machine with config: &{Name:newest-cni-245604 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-245604 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 20:54:51.755192  744523 start.go:125] createHost starting for "" (driver="docker")
	I1202 20:54:51.755295  744523 cache.go:115] /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1202 20:54:51.755311  744523 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 531.706µs
	I1202 20:54:51.755333  744523 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1202 20:54:51.755341  744523 cache.go:87] Successfully saved all images to host disk.
	I1202 20:54:49.807275  736301 out.go:252]   - Booting up control plane ...
	I1202 20:54:49.807399  736301 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1202 20:54:49.807498  736301 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1202 20:54:49.807593  736301 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1202 20:54:49.820733  736301 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1202 20:54:49.820866  736301 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1202 20:54:49.828232  736301 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1202 20:54:49.829367  736301 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1202 20:54:49.829419  736301 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1202 20:54:49.939090  736301 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1202 20:54:49.939273  736301 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1202 20:54:50.939981  736301 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.00098678s
	I1202 20:54:50.943942  736301 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1202 20:54:50.944097  736301 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8444/livez
	I1202 20:54:50.944200  736301 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1202 20:54:50.944356  736301 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W1202 20:54:48.889199  727677 node_ready.go:57] node "no-preload-336331" has "Ready":"False" status (will retry)
	W1202 20:54:50.889639  727677 node_ready.go:57] node "no-preload-336331" has "Ready":"False" status (will retry)
	W1202 20:54:52.890301  727677 node_ready.go:57] node "no-preload-336331" has "Ready":"False" status (will retry)
	I1202 20:54:48.917338  743547 out.go:252] * Restarting existing docker container for "old-k8s-version-992336" ...
	I1202 20:54:48.917418  743547 cli_runner.go:164] Run: docker start old-k8s-version-992336
	I1202 20:54:49.233874  743547 cli_runner.go:164] Run: docker container inspect old-k8s-version-992336 --format={{.State.Status}}
	I1202 20:54:49.254208  743547 kic.go:430] container "old-k8s-version-992336" state is running.
	I1202 20:54:49.254576  743547 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-992336
	I1202 20:54:49.276197  743547 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/old-k8s-version-992336/config.json ...
	I1202 20:54:49.276474  743547 machine.go:94] provisionDockerMachine start ...
	I1202 20:54:49.276556  743547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-992336
	I1202 20:54:49.295873  743547 main.go:143] libmachine: Using SSH client type: native
	I1202 20:54:49.296238  743547 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33488 <nil> <nil>}
	I1202 20:54:49.296255  743547 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 20:54:49.296917  743547 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:36762->127.0.0.1:33488: read: connection reset by peer
	I1202 20:54:52.482289  743547 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-992336
	
	I1202 20:54:52.482326  743547 ubuntu.go:182] provisioning hostname "old-k8s-version-992336"
	I1202 20:54:52.482403  743547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-992336
	I1202 20:54:52.508620  743547 main.go:143] libmachine: Using SSH client type: native
	I1202 20:54:52.509026  743547 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33488 <nil> <nil>}
	I1202 20:54:52.509045  743547 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-992336 && echo "old-k8s-version-992336" | sudo tee /etc/hostname
	I1202 20:54:52.680116  743547 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-992336
	
	I1202 20:54:52.680210  743547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-992336
	I1202 20:54:52.706295  743547 main.go:143] libmachine: Using SSH client type: native
	I1202 20:54:52.706638  743547 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33488 <nil> <nil>}
	I1202 20:54:52.706666  743547 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-992336' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-992336/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-992336' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 20:54:52.868164  743547 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1202 20:54:52.868203  743547 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-407427/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-407427/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-407427/.minikube}
	I1202 20:54:52.868253  743547 ubuntu.go:190] setting up certificates
	I1202 20:54:52.868266  743547 provision.go:84] configureAuth start
	I1202 20:54:52.868351  743547 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-992336
	I1202 20:54:52.896120  743547 provision.go:143] copyHostCerts
	I1202 20:54:52.896189  743547 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-407427/.minikube/ca.pem, removing ...
	I1202 20:54:52.896201  743547 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-407427/.minikube/ca.pem
	I1202 20:54:52.896288  743547 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-407427/.minikube/ca.pem (1082 bytes)
	I1202 20:54:52.896403  743547 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-407427/.minikube/cert.pem, removing ...
	I1202 20:54:52.896415  743547 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-407427/.minikube/cert.pem
	I1202 20:54:52.896450  743547 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-407427/.minikube/cert.pem (1123 bytes)
	I1202 20:54:52.896523  743547 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-407427/.minikube/key.pem, removing ...
	I1202 20:54:52.896534  743547 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-407427/.minikube/key.pem
	I1202 20:54:52.896565  743547 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-407427/.minikube/key.pem (1675 bytes)
	I1202 20:54:52.896627  743547 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-407427/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-992336 san=[127.0.0.1 192.168.94.2 localhost minikube old-k8s-version-992336]
	I1202 20:54:53.042224  743547 provision.go:177] copyRemoteCerts
	I1202 20:54:53.042352  743547 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 20:54:53.042421  743547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-992336
	I1202 20:54:53.066302  743547 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/old-k8s-version-992336/id_rsa Username:docker}
	I1202 20:54:53.180785  743547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 20:54:53.215027  743547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1202 20:54:53.249137  743547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1202 20:54:53.276327  743547 provision.go:87] duration metric: took 408.04457ms to configureAuth
	I1202 20:54:53.276364  743547 ubuntu.go:206] setting minikube options for container-runtime
	I1202 20:54:53.276661  743547 config.go:182] Loaded profile config "old-k8s-version-992336": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1202 20:54:53.276881  743547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-992336
	I1202 20:54:53.305450  743547 main.go:143] libmachine: Using SSH client type: native
	I1202 20:54:53.305788  743547 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33488 <nil> <nil>}
	I1202 20:54:53.305819  743547 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 20:54:53.745248  743547 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 20:54:53.745280  743547 machine.go:97] duration metric: took 4.468788993s to provisionDockerMachine
	I1202 20:54:53.745299  743547 start.go:293] postStartSetup for "old-k8s-version-992336" (driver="docker")
	I1202 20:54:53.745313  743547 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 20:54:53.745402  743547 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 20:54:53.745451  743547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-992336
	I1202 20:54:53.773838  743547 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/old-k8s-version-992336/id_rsa Username:docker}
	I1202 20:54:53.877082  743547 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 20:54:53.881285  743547 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 20:54:53.881316  743547 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 20:54:53.881332  743547 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-407427/.minikube/addons for local assets ...
	I1202 20:54:53.881412  743547 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-407427/.minikube/files for local assets ...
	I1202 20:54:53.881515  743547 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-407427/.minikube/files/etc/ssl/certs/4110322.pem -> 4110322.pem in /etc/ssl/certs
	I1202 20:54:53.881673  743547 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 20:54:53.890517  743547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/files/etc/ssl/certs/4110322.pem --> /etc/ssl/certs/4110322.pem (1708 bytes)
	I1202 20:54:53.911353  743547 start.go:296] duration metric: took 166.0361ms for postStartSetup
	I1202 20:54:53.911460  743547 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 20:54:53.911513  743547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-992336
	I1202 20:54:53.934180  743547 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/old-k8s-version-992336/id_rsa Username:docker}
	I1202 20:54:54.034877  743547 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 20:54:54.040410  743547 fix.go:56] duration metric: took 5.146736871s for fixHost
	I1202 20:54:54.040443  743547 start.go:83] releasing machines lock for "old-k8s-version-992336", held for 5.146795457s
	I1202 20:54:54.040529  743547 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-992336
	I1202 20:54:54.060426  743547 ssh_runner.go:195] Run: cat /version.json
	I1202 20:54:54.060485  743547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-992336
	I1202 20:54:54.060496  743547 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 20:54:54.060573  743547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-992336
	I1202 20:54:54.082901  743547 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/old-k8s-version-992336/id_rsa Username:docker}
	I1202 20:54:54.082948  743547 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/old-k8s-version-992336/id_rsa Username:docker}
	I1202 20:54:54.182659  743547 ssh_runner.go:195] Run: systemctl --version
	I1202 20:54:54.241255  743547 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 20:54:54.279690  743547 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 20:54:54.284969  743547 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 20:54:54.285109  743547 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 20:54:54.294313  743547 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1202 20:54:54.294343  743547 start.go:496] detecting cgroup driver to use...
	I1202 20:54:54.294378  743547 detect.go:190] detected "systemd" cgroup driver on host os
	I1202 20:54:54.294431  743547 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 20:54:54.311476  743547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 20:54:54.325741  743547 docker.go:218] disabling cri-docker service (if available) ...
	I1202 20:54:54.325809  743547 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 20:54:54.342382  743547 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 20:54:54.356905  743547 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 20:54:54.449514  743547 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 20:54:54.540100  743547 docker.go:234] disabling docker service ...
	I1202 20:54:54.540175  743547 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 20:54:54.557954  743547 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 20:54:54.575642  743547 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 20:54:54.677171  743547 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 20:54:54.787938  743547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 20:54:54.805380  743547 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 20:54:54.824665  743547 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1202 20:54:54.824729  743547 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:54:54.837044  743547 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1202 20:54:54.837142  743547 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:54:54.849210  743547 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:54:54.860907  743547 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:54:54.871629  743547 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 20:54:54.882082  743547 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:54:54.893928  743547 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:54:54.905219  743547 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:54:54.917032  743547 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 20:54:54.927659  743547 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 20:54:54.938429  743547 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 20:54:55.059022  743547 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1202 20:54:55.238974  743547 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 20:54:55.239099  743547 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 20:54:55.245135  743547 start.go:564] Will wait 60s for crictl version
	I1202 20:54:55.245210  743547 ssh_runner.go:195] Run: which crictl
	I1202 20:54:55.250232  743547 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 20:54:55.282324  743547 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1202 20:54:55.282412  743547 ssh_runner.go:195] Run: crio --version
	I1202 20:54:55.320935  743547 ssh_runner.go:195] Run: crio --version
	I1202 20:54:55.361998  743547 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.2 ...
	I1202 20:54:52.527997  736301 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.583967669s
	I1202 20:54:53.622779  736301 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.678806737s
	I1202 20:54:55.446643  736301 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.502546315s
	I1202 20:54:55.467578  736301 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1202 20:54:55.486539  736301 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1202 20:54:55.505049  736301 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1202 20:54:55.505398  736301 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-997805 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1202 20:54:55.516932  736301 kubeadm.go:319] [bootstrap-token] Using token: clatot.hc48jyk0hvxonz06
	I1202 20:54:51.758445  744523 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1202 20:54:51.758787  744523 start.go:159] libmachine.API.Create for "newest-cni-245604" (driver="docker")
	I1202 20:54:51.758834  744523 client.go:173] LocalClient.Create starting
	I1202 20:54:51.758936  744523 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca.pem
	I1202 20:54:51.759008  744523 main.go:143] libmachine: Decoding PEM data...
	I1202 20:54:51.759032  744523 main.go:143] libmachine: Parsing certificate...
	I1202 20:54:51.759118  744523 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21997-407427/.minikube/certs/cert.pem
	I1202 20:54:51.759148  744523 main.go:143] libmachine: Decoding PEM data...
	I1202 20:54:51.759171  744523 main.go:143] libmachine: Parsing certificate...
	I1202 20:54:51.759637  744523 cli_runner.go:164] Run: docker network inspect newest-cni-245604 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1202 20:54:51.781898  744523 cli_runner.go:211] docker network inspect newest-cni-245604 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1202 20:54:51.781982  744523 network_create.go:284] running [docker network inspect newest-cni-245604] to gather additional debugging logs...
	I1202 20:54:51.782006  744523 cli_runner.go:164] Run: docker network inspect newest-cni-245604
	W1202 20:54:51.801637  744523 cli_runner.go:211] docker network inspect newest-cni-245604 returned with exit code 1
	I1202 20:54:51.801678  744523 network_create.go:287] error running [docker network inspect newest-cni-245604]: docker network inspect newest-cni-245604: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-245604 not found
	I1202 20:54:51.801697  744523 network_create.go:289] output of [docker network inspect newest-cni-245604]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-245604 not found
	
	** /stderr **
	I1202 20:54:51.801890  744523 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 20:54:51.824870  744523 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-acf081edf266 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:42:04:c0:60:47:62} reservation:<nil>}
	I1202 20:54:51.825911  744523 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-9623a21fb225 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:6a:fc:8b:40:15:1b} reservation:<nil>}
	I1202 20:54:51.826609  744523 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-f2b79e7e26a5 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:0e:c7:f4:38:1c:32} reservation:<nil>}
	I1202 20:54:51.827584  744523 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-be4fb772701b IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:22:87:5f:38:96:b7} reservation:<nil>}
	I1202 20:54:51.828542  744523 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-13fe483902b9 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:a2:a4:21:b2:62:5a} reservation:<nil>}
	I1202 20:54:51.829195  744523 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-65ab470fa0e2 IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:16:23:28:7c:c5:24} reservation:<nil>}
	I1202 20:54:51.830231  744523 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ed3d00}
	I1202 20:54:51.830266  744523 network_create.go:124] attempt to create docker network newest-cni-245604 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1202 20:54:51.830316  744523 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-245604 newest-cni-245604
	I1202 20:54:51.887973  744523 network_create.go:108] docker network newest-cni-245604 192.168.103.0/24 created
	I1202 20:54:51.888023  744523 kic.go:121] calculated static IP "192.168.103.2" for the "newest-cni-245604" container
	I1202 20:54:51.888128  744523 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1202 20:54:51.909991  744523 cli_runner.go:164] Run: docker volume create newest-cni-245604 --label name.minikube.sigs.k8s.io=newest-cni-245604 --label created_by.minikube.sigs.k8s.io=true
	I1202 20:54:51.933849  744523 oci.go:103] Successfully created a docker volume newest-cni-245604
	I1202 20:54:51.933969  744523 cli_runner.go:164] Run: docker run --rm --name newest-cni-245604-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-245604 --entrypoint /usr/bin/test -v newest-cni-245604:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib
	I1202 20:54:52.386347  744523 oci.go:107] Successfully prepared a docker volume newest-cni-245604
	I1202 20:54:52.386442  744523 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	W1202 20:54:52.386653  744523 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1202 20:54:52.386714  744523 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1202 20:54:52.386763  744523 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1202 20:54:52.468472  744523 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-245604 --name newest-cni-245604 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-245604 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-245604 --network newest-cni-245604 --ip 192.168.103.2 --volume newest-cni-245604:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b
	I1202 20:54:52.834787  744523 cli_runner.go:164] Run: docker container inspect newest-cni-245604 --format={{.State.Running}}
	I1202 20:54:52.859568  744523 cli_runner.go:164] Run: docker container inspect newest-cni-245604 --format={{.State.Status}}
	I1202 20:54:52.888318  744523 cli_runner.go:164] Run: docker exec newest-cni-245604 stat /var/lib/dpkg/alternatives/iptables
	I1202 20:54:52.947034  744523 oci.go:144] the created container "newest-cni-245604" has a running status.
	I1202 20:54:52.947106  744523 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21997-407427/.minikube/machines/newest-cni-245604/id_rsa...
	I1202 20:54:53.161566  744523 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21997-407427/.minikube/machines/newest-cni-245604/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1202 20:54:53.197985  744523 cli_runner.go:164] Run: docker container inspect newest-cni-245604 --format={{.State.Status}}
	I1202 20:54:53.229219  744523 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1202 20:54:53.229249  744523 kic_runner.go:114] Args: [docker exec --privileged newest-cni-245604 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1202 20:54:53.293954  744523 cli_runner.go:164] Run: docker container inspect newest-cni-245604 --format={{.State.Status}}
	I1202 20:54:53.319791  744523 machine.go:94] provisionDockerMachine start ...
	I1202 20:54:53.319987  744523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-245604
	I1202 20:54:53.347829  744523 main.go:143] libmachine: Using SSH client type: native
	I1202 20:54:53.348214  744523 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33493 <nil> <nil>}
	I1202 20:54:53.348237  744523 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 20:54:53.514601  744523 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-245604
	
	I1202 20:54:53.514632  744523 ubuntu.go:182] provisioning hostname "newest-cni-245604"
	I1202 20:54:53.514706  744523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-245604
	I1202 20:54:53.543984  744523 main.go:143] libmachine: Using SSH client type: native
	I1202 20:54:53.544329  744523 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33493 <nil> <nil>}
	I1202 20:54:53.544354  744523 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-245604 && echo "newest-cni-245604" | sudo tee /etc/hostname
	I1202 20:54:53.729217  744523 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-245604
	
	I1202 20:54:53.729302  744523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-245604
	I1202 20:54:53.755581  744523 main.go:143] libmachine: Using SSH client type: native
	I1202 20:54:53.755911  744523 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33493 <nil> <nil>}
	I1202 20:54:53.755944  744523 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-245604' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-245604/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-245604' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 20:54:53.904745  744523 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1202 20:54:53.904773  744523 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-407427/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-407427/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-407427/.minikube}
	I1202 20:54:53.904818  744523 ubuntu.go:190] setting up certificates
	I1202 20:54:53.904831  744523 provision.go:84] configureAuth start
	I1202 20:54:53.904887  744523 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-245604
	I1202 20:54:53.926340  744523 provision.go:143] copyHostCerts
	I1202 20:54:53.926412  744523 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-407427/.minikube/ca.pem, removing ...
	I1202 20:54:53.926426  744523 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-407427/.minikube/ca.pem
	I1202 20:54:53.926508  744523 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-407427/.minikube/ca.pem (1082 bytes)
	I1202 20:54:53.926637  744523 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-407427/.minikube/cert.pem, removing ...
	I1202 20:54:53.926646  744523 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-407427/.minikube/cert.pem
	I1202 20:54:53.926677  744523 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-407427/.minikube/cert.pem (1123 bytes)
	I1202 20:54:53.926741  744523 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-407427/.minikube/key.pem, removing ...
	I1202 20:54:53.926749  744523 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-407427/.minikube/key.pem
	I1202 20:54:53.926776  744523 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-407427/.minikube/key.pem (1675 bytes)
	I1202 20:54:53.926832  744523 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-407427/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca-key.pem org=jenkins.newest-cni-245604 san=[127.0.0.1 192.168.103.2 localhost minikube newest-cni-245604]
	I1202 20:54:54.033669  744523 provision.go:177] copyRemoteCerts
	I1202 20:54:54.033748  744523 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 20:54:54.033805  744523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-245604
	I1202 20:54:54.055356  744523 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/newest-cni-245604/id_rsa Username:docker}
	I1202 20:54:54.161586  744523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 20:54:54.183507  744523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1202 20:54:54.203578  744523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
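The server certificate copied to /etc/docker/server.pem above was generated with the SANs listed at the "generating server cert" line (127.0.0.1, 192.168.103.2, localhost, minikube, newest-cni-245604); they can be confirmed on the node with, for example:

	sudo openssl x509 -noout -text -in /etc/docker/server.pem | grep -A1 'Subject Alternative Name'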
	I1202 20:54:54.223521  744523 provision.go:87] duration metric: took 318.655712ms to configureAuth
	I1202 20:54:54.223562  744523 ubuntu.go:206] setting minikube options for container-runtime
	I1202 20:54:54.223787  744523 config.go:182] Loaded profile config "newest-cni-245604": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 20:54:54.223932  744523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-245604
	I1202 20:54:54.243976  744523 main.go:143] libmachine: Using SSH client type: native
	I1202 20:54:54.244266  744523 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33493 <nil> <nil>}
	I1202 20:54:54.244285  744523 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 20:54:54.563270  744523 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 20:54:54.563301  744523 machine.go:97] duration metric: took 1.243461731s to provisionDockerMachine
	I1202 20:54:54.563315  744523 client.go:176] duration metric: took 2.804467588s to LocalClient.Create
	I1202 20:54:54.563333  744523 start.go:167] duration metric: took 2.804549056s to libmachine.API.Create "newest-cni-245604"
	I1202 20:54:54.563343  744523 start.go:293] postStartSetup for "newest-cni-245604" (driver="docker")
	I1202 20:54:54.563359  744523 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 20:54:54.563434  744523 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 20:54:54.563487  744523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-245604
	I1202 20:54:54.587633  744523 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/newest-cni-245604/id_rsa Username:docker}
	I1202 20:54:54.704139  744523 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 20:54:54.711871  744523 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 20:54:54.711907  744523 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 20:54:54.711923  744523 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-407427/.minikube/addons for local assets ...
	I1202 20:54:54.711998  744523 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-407427/.minikube/files for local assets ...
	I1202 20:54:54.712158  744523 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-407427/.minikube/files/etc/ssl/certs/4110322.pem -> 4110322.pem in /etc/ssl/certs
	I1202 20:54:54.712308  744523 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 20:54:54.727333  744523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/files/etc/ssl/certs/4110322.pem --> /etc/ssl/certs/4110322.pem (1708 bytes)
	I1202 20:54:54.756096  744523 start.go:296] duration metric: took 192.737221ms for postStartSetup
	I1202 20:54:54.756539  744523 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-245604
	I1202 20:54:54.779332  744523 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/newest-cni-245604/config.json ...
	I1202 20:54:54.779682  744523 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 20:54:54.779734  744523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-245604
	I1202 20:54:54.804251  744523 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/newest-cni-245604/id_rsa Username:docker}
	I1202 20:54:54.909217  744523 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 20:54:54.915212  744523 start.go:128] duration metric: took 3.160001099s to createHost
	I1202 20:54:54.915249  744523 start.go:83] releasing machines lock for "newest-cni-245604", held for 3.160217279s
	I1202 20:54:54.915329  744523 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-245604
	I1202 20:54:54.939674  744523 ssh_runner.go:195] Run: cat /version.json
	I1202 20:54:54.939748  744523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-245604
	I1202 20:54:54.939782  744523 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 20:54:54.939880  744523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-245604
	I1202 20:54:54.964142  744523 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/newest-cni-245604/id_rsa Username:docker}
	I1202 20:54:54.965218  744523 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/newest-cni-245604/id_rsa Username:docker}
	I1202 20:54:55.150195  744523 ssh_runner.go:195] Run: systemctl --version
	I1202 20:54:55.159061  744523 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 20:54:55.203041  744523 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 20:54:55.209011  744523 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 20:54:55.209128  744523 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 20:54:55.242651  744523 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
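The find invocation above is logged with its shell quoting stripped; written out as a runnable command it corresponds roughly to the following, which renames any pre-existing bridge/podman CNI configs so they do not conflict with the CNI minikube installs later (kindnet, per the "recommending kindnet" lines further down):

	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	  -printf "%p, " -exec sh -c 'sudo mv "$1" "$1.mk_disabled"' _ {} \;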
	I1202 20:54:55.242680  744523 start.go:496] detecting cgroup driver to use...
	I1202 20:54:55.242718  744523 detect.go:190] detected "systemd" cgroup driver on host os
	I1202 20:54:55.242772  744523 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 20:54:55.265988  744523 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 20:54:55.283822  744523 docker.go:218] disabling cri-docker service (if available) ...
	I1202 20:54:55.283891  744523 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 20:54:55.306452  744523 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 20:54:55.330861  744523 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 20:54:55.437811  744523 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 20:54:55.558513  744523 docker.go:234] disabling docker service ...
	I1202 20:54:55.558591  744523 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 20:54:55.580602  744523 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 20:54:55.596697  744523 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 20:54:55.714954  744523 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 20:54:55.820710  744523 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 20:54:55.834948  744523 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 20:54:55.852971  744523 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1202 20:54:55.853038  744523 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:54:55.866995  744523 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1202 20:54:55.867101  744523 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:54:55.884788  744523 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:54:55.901200  744523 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:54:55.918342  744523 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 20:54:55.928191  744523 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:54:55.938885  744523 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:54:55.955266  744523 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
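After this run of sed edits, /etc/crio/crio.conf.d/02-crio.conf should end up with roughly these settings (reconstructed from the commands above): the pinned pause image, the systemd cgroup manager with conmon in the pod cgroup, and unprivileged low ports enabled:

	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]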
	I1202 20:54:55.965380  744523 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 20:54:55.974592  744523 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 20:54:55.983203  744523 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 20:54:56.089565  744523 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1202 20:54:56.246748  744523 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 20:54:56.246822  744523 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 20:54:56.251650  744523 start.go:564] Will wait 60s for crictl version
	I1202 20:54:56.251725  744523 ssh_runner.go:195] Run: which crictl
	I1202 20:54:56.259643  744523 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 20:54:56.294960  744523 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1202 20:54:56.295118  744523 ssh_runner.go:195] Run: crio --version
	I1202 20:54:56.335315  744523 ssh_runner.go:195] Run: crio --version
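The same runtime details can be re-checked by hand from the host, e.g. (profile name taken from the log above):

	minikube -p newest-cni-245604 ssh -- sudo crictl version
	minikube -p newest-cni-245604 ssh -- crio --version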
	I1202 20:54:56.375510  744523 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.2 ...
	I1202 20:54:56.376891  744523 cli_runner.go:164] Run: docker network inspect newest-cni-245604 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 20:54:56.404101  744523 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1202 20:54:56.410059  744523 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 20:54:56.428224  744523 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1202 20:54:55.363273  743547 cli_runner.go:164] Run: docker network inspect old-k8s-version-992336 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 20:54:55.391463  743547 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1202 20:54:55.395875  743547 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 20:54:55.407541  743547 kubeadm.go:884] updating cluster {Name:old-k8s-version-992336 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-992336 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 20:54:55.407687  743547 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1202 20:54:55.407752  743547 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 20:54:55.448888  743547 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 20:54:55.448914  743547 crio.go:433] Images already preloaded, skipping extraction
	I1202 20:54:55.448981  743547 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 20:54:55.488955  743547 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 20:54:55.488987  743547 cache_images.go:86] Images are preloaded, skipping loading
	I1202 20:54:55.488997  743547 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.28.0 crio true true} ...
	I1202 20:54:55.489187  743547 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-992336 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-992336 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
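This is the kubelet systemd drop-in minikube renders for the node; it is written out a few lines further down as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 372-byte scp). On the node, the effective unit can be inspected with:

	systemctl cat kubelet
	systemctl status kubelet --no-pager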
	I1202 20:54:55.489281  743547 ssh_runner.go:195] Run: crio config
	I1202 20:54:55.555002  743547 cni.go:84] Creating CNI manager for ""
	I1202 20:54:55.555029  743547 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 20:54:55.555046  743547 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1202 20:54:55.555089  743547 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-992336 NodeName:old-k8s-version-992336 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 20:54:55.555302  743547 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-992336"
	  kubeletExtraArgs:
	    node-ip: 192.168.94.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1202 20:54:55.555391  743547 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1202 20:54:55.564702  743547 binaries.go:51] Found k8s binaries, skipping transfer
	I1202 20:54:55.564796  743547 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 20:54:55.574017  743547 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1202 20:54:55.590044  743547 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1202 20:54:55.607238  743547 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
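The rendered kubeadm config is staged as /var/tmp/minikube/kubeadm.yaml.new and compared against the live /var/tmp/minikube/kubeadm.yaml with the `sudo diff -u` run further down to decide whether the control plane needs reconfiguring. For manual troubleshooting, a dry run of a staged config of this shape could look like this, using the same pinned binary path as the log:

	sudo /var/lib/minikube/binaries/v1.28.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run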
	I1202 20:54:55.624302  743547 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1202 20:54:55.629565  743547 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 20:54:55.647331  743547 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 20:54:55.746705  743547 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 20:54:55.778223  743547 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/old-k8s-version-992336 for IP: 192.168.94.2
	I1202 20:54:55.778263  743547 certs.go:195] generating shared ca certs ...
	I1202 20:54:55.778286  743547 certs.go:227] acquiring lock for ca certs: {Name:mkea11924193dd4dc459e16d70207bde383196df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:54:55.778470  743547 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-407427/.minikube/ca.key
	I1202 20:54:55.778540  743547 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-407427/.minikube/proxy-client-ca.key
	I1202 20:54:55.778555  743547 certs.go:257] generating profile certs ...
	I1202 20:54:55.778691  743547 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/old-k8s-version-992336/client.key
	I1202 20:54:55.778774  743547 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/old-k8s-version-992336/apiserver.key.26e20487
	I1202 20:54:55.778826  743547 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/old-k8s-version-992336/proxy-client.key
	I1202 20:54:55.778974  743547 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/411032.pem (1338 bytes)
	W1202 20:54:55.779023  743547 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-407427/.minikube/certs/411032_empty.pem, impossibly tiny 0 bytes
	I1202 20:54:55.779039  743547 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca-key.pem (1679 bytes)
	I1202 20:54:55.779165  743547 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca.pem (1082 bytes)
	I1202 20:54:55.779217  743547 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/cert.pem (1123 bytes)
	I1202 20:54:55.779265  743547 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/key.pem (1675 bytes)
	I1202 20:54:55.779335  743547 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/files/etc/ssl/certs/4110322.pem (1708 bytes)
	I1202 20:54:55.780235  743547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 20:54:55.803356  743547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1202 20:54:55.826463  743547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 20:54:55.847561  743547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1202 20:54:55.875979  743547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/old-k8s-version-992336/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1202 20:54:55.904532  743547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/old-k8s-version-992336/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1202 20:54:55.931492  743547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/old-k8s-version-992336/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 20:54:55.951900  743547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/old-k8s-version-992336/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1202 20:54:55.972640  743547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 20:54:55.992667  743547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/certs/411032.pem --> /usr/share/ca-certificates/411032.pem (1338 bytes)
	I1202 20:54:56.015555  743547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/files/etc/ssl/certs/4110322.pem --> /usr/share/ca-certificates/4110322.pem (1708 bytes)
	I1202 20:54:56.042035  743547 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 20:54:56.059891  743547 ssh_runner.go:195] Run: openssl version
	I1202 20:54:56.068335  743547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 20:54:56.079667  743547 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 20:54:56.085893  743547 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 19:55 /usr/share/ca-certificates/minikubeCA.pem
	I1202 20:54:56.085977  743547 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 20:54:56.143330  743547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 20:54:56.156665  743547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/411032.pem && ln -fs /usr/share/ca-certificates/411032.pem /etc/ssl/certs/411032.pem"
	I1202 20:54:56.169457  743547 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/411032.pem
	I1202 20:54:56.174154  743547 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 20:13 /usr/share/ca-certificates/411032.pem
	I1202 20:54:56.174225  743547 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/411032.pem
	I1202 20:54:56.213730  743547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/411032.pem /etc/ssl/certs/51391683.0"
	I1202 20:54:56.223332  743547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4110322.pem && ln -fs /usr/share/ca-certificates/4110322.pem /etc/ssl/certs/4110322.pem"
	I1202 20:54:56.233176  743547 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4110322.pem
	I1202 20:54:56.237408  743547 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 20:13 /usr/share/ca-certificates/4110322.pem
	I1202 20:54:56.237477  743547 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4110322.pem
	I1202 20:54:56.290593  743547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4110322.pem /etc/ssl/certs/3ec20f2e.0"
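The link names used here are OpenSSL subject-hash filenames: `openssl x509 -hash -noout -in <cert>` prints the hash under which OpenSSL expects to find the certificate as /etc/ssl/certs/<hash>.0. For the minikube CA handled above:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# prints b5213941, matching the /etc/ssl/certs/b5213941.0 symlink created above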
	I1202 20:54:56.304474  743547 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 20:54:56.310604  743547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1202 20:54:56.360515  743547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1202 20:54:56.413594  743547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1202 20:54:56.475091  743547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1202 20:54:56.542472  743547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1202 20:54:56.584464  743547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
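Each of these `-checkend 86400` probes exits non-zero if the certificate in question expires within the next 86400 seconds (24 hours), so the six runs above confirm none of the control-plane certificates are about to expire; a standalone equivalent:

	sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	  && echo "valid for at least 24h" || echo "expiring within 24h"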
	I1202 20:54:56.628756  743547 kubeadm.go:401] StartCluster: {Name:old-k8s-version-992336 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-992336 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 20:54:56.628871  743547 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 20:54:56.628955  743547 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 20:54:56.671457  743547 cri.go:89] found id: "b1921b3926c4fba551a94a0ec78b54be832b8754401c93ba491ed82e1b71e6be"
	I1202 20:54:56.671542  743547 cri.go:89] found id: "e1e39d0565d3822bf2f251fdb0e8de5f07938ae3aad30710f3eb435ed8294864"
	I1202 20:54:56.671588  743547 cri.go:89] found id: "b30d0a318021ad78d96505cbec12dab08e463997373813e56adc6e14d585834d"
	I1202 20:54:56.671610  743547 cri.go:89] found id: "670db3462ea1c5beb2d55dfd0859b3df17a3bf33ad117a56693583fcb4ccdd66"
	I1202 20:54:56.671636  743547 cri.go:89] found id: ""
	I1202 20:54:56.671705  743547 ssh_runner.go:195] Run: sudo runc list -f json
	W1202 20:54:56.690130  743547 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T20:54:56Z" level=error msg="open /run/runc: no such file or directory"
	I1202 20:54:56.690230  743547 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 20:54:56.708246  743547 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1202 20:54:56.708273  743547 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1202 20:54:56.708319  743547 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1202 20:54:56.720174  743547 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1202 20:54:56.721412  743547 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-992336" does not appear in /home/jenkins/minikube-integration/21997-407427/kubeconfig
	I1202 20:54:56.721919  743547 kubeconfig.go:62] /home/jenkins/minikube-integration/21997-407427/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-992336" cluster setting kubeconfig missing "old-k8s-version-992336" context setting]
	I1202 20:54:56.723060  743547 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-407427/kubeconfig: {Name:mk71769b01652992a8aaa239aae4f5c38b3500e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:54:56.725527  743547 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1202 20:54:56.740149  743547 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1202 20:54:56.740191  743547 kubeadm.go:602] duration metric: took 31.910169ms to restartPrimaryControlPlane
	I1202 20:54:56.740203  743547 kubeadm.go:403] duration metric: took 111.45868ms to StartCluster
	I1202 20:54:56.740224  743547 settings.go:142] acquiring lock: {Name:mk79858205e0c110b31d2645b49097e349ff37c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:54:56.740303  743547 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-407427/kubeconfig
	I1202 20:54:56.741496  743547 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-407427/kubeconfig: {Name:mk71769b01652992a8aaa239aae4f5c38b3500e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:54:56.741802  743547 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 20:54:56.742098  743547 config.go:182] Loaded profile config "old-k8s-version-992336": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1202 20:54:56.742170  743547 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1202 20:54:56.742263  743547 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-992336"
	I1202 20:54:56.742288  743547 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-992336"
	W1202 20:54:56.742297  743547 addons.go:248] addon storage-provisioner should already be in state true
	I1202 20:54:56.742330  743547 host.go:66] Checking if "old-k8s-version-992336" exists ...
	I1202 20:54:56.742855  743547 cli_runner.go:164] Run: docker container inspect old-k8s-version-992336 --format={{.State.Status}}
	I1202 20:54:56.742984  743547 addons.go:70] Setting dashboard=true in profile "old-k8s-version-992336"
	I1202 20:54:56.743010  743547 addons.go:239] Setting addon dashboard=true in "old-k8s-version-992336"
	W1202 20:54:56.743021  743547 addons.go:248] addon dashboard should already be in state true
	I1202 20:54:56.743017  743547 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-992336"
	I1202 20:54:56.743057  743547 host.go:66] Checking if "old-k8s-version-992336" exists ...
	I1202 20:54:56.743058  743547 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-992336"
	I1202 20:54:56.743415  743547 cli_runner.go:164] Run: docker container inspect old-k8s-version-992336 --format={{.State.Status}}
	I1202 20:54:56.743565  743547 cli_runner.go:164] Run: docker container inspect old-k8s-version-992336 --format={{.State.Status}}
	I1202 20:54:56.747183  743547 out.go:179] * Verifying Kubernetes components...
	I1202 20:54:56.751095  743547 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 20:54:56.779215  743547 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 20:54:56.779222  743547 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1202 20:54:56.780910  743547 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 20:54:56.780933  743547 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1202 20:54:56.780934  743547 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1202 20:54:55.518402  736301 out.go:252]   - Configuring RBAC rules ...
	I1202 20:54:55.518551  736301 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1202 20:54:55.525177  736301 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1202 20:54:55.532974  736301 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1202 20:54:55.536672  736301 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1202 20:54:55.540648  736301 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1202 20:54:55.544671  736301 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1202 20:54:55.854962  736301 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1202 20:54:56.282748  736301 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1202 20:54:56.855924  736301 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1202 20:54:56.858599  736301 kubeadm.go:319] 
	I1202 20:54:56.858728  736301 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1202 20:54:56.858735  736301 kubeadm.go:319] 
	I1202 20:54:56.858833  736301 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1202 20:54:56.858838  736301 kubeadm.go:319] 
	I1202 20:54:56.858870  736301 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1202 20:54:56.858943  736301 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1202 20:54:56.859016  736301 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1202 20:54:56.859022  736301 kubeadm.go:319] 
	I1202 20:54:56.859103  736301 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1202 20:54:56.859109  736301 kubeadm.go:319] 
	I1202 20:54:56.859165  736301 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1202 20:54:56.859178  736301 kubeadm.go:319] 
	I1202 20:54:56.859235  736301 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1202 20:54:56.859323  736301 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1202 20:54:56.859397  736301 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1202 20:54:56.859403  736301 kubeadm.go:319] 
	I1202 20:54:56.859502  736301 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1202 20:54:56.859589  736301 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1202 20:54:56.859596  736301 kubeadm.go:319] 
	I1202 20:54:56.859693  736301 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token clatot.hc48jyk0hvxonz06 \
	I1202 20:54:56.859818  736301 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:48349ffa2369b87cd15480ece56edb1c5c5d97a828930f9dfbf1d1ceccf80ff4 \
	I1202 20:54:56.859842  736301 kubeadm.go:319] 	--control-plane 
	I1202 20:54:56.859847  736301 kubeadm.go:319] 
	I1202 20:54:56.859939  736301 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1202 20:54:56.859945  736301 kubeadm.go:319] 
	I1202 20:54:56.860051  736301 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token clatot.hc48jyk0hvxonz06 \
	I1202 20:54:56.860179  736301 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:48349ffa2369b87cd15480ece56edb1c5c5d97a828930f9dfbf1d1ceccf80ff4 
	I1202 20:54:56.865687  736301 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1202 20:54:56.865923  736301 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
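minikube starts the kubelet itself (see the `sudo systemctl start kubelet` runs elsewhere in this log), so the second warning is usually harmless here; the fix kubeadm suggests would simply be:

	sudo systemctl enable kubelet.service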
	I1202 20:54:56.865962  736301 cni.go:84] Creating CNI manager for ""
	I1202 20:54:56.865975  736301 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 20:54:56.868615  736301 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1202 20:54:55.389753  727677 node_ready.go:57] node "no-preload-336331" has "Ready":"False" status (will retry)
	W1202 20:54:57.391499  727677 node_ready.go:57] node "no-preload-336331" has "Ready":"False" status (will retry)
	I1202 20:54:57.889990  727677 node_ready.go:49] node "no-preload-336331" is "Ready"
	I1202 20:54:57.890026  727677 node_ready.go:38] duration metric: took 13.504157695s for node "no-preload-336331" to be "Ready" ...
	I1202 20:54:57.890044  727677 api_server.go:52] waiting for apiserver process to appear ...
	I1202 20:54:57.890144  727677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:54:57.912775  727677 api_server.go:72] duration metric: took 13.890609716s to wait for apiserver process to appear ...
	I1202 20:54:57.912809  727677 api_server.go:88] waiting for apiserver healthz status ...
	I1202 20:54:57.912934  727677 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1202 20:54:57.923648  727677 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1202 20:54:57.925968  727677 api_server.go:141] control plane version: v1.35.0-beta.0
	I1202 20:54:57.926004  727677 api_server.go:131] duration metric: took 13.121364ms to wait for apiserver health ...
	I1202 20:54:57.926015  727677 system_pods.go:43] waiting for kube-system pods to appear ...
	I1202 20:54:57.930714  727677 system_pods.go:59] 8 kube-system pods found
	I1202 20:54:57.930823  727677 system_pods.go:61] "coredns-7d764666f9-ghxk6" [1696ea67-a1db-437c-bada-07c12d4e9fc8] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 20:54:57.930836  727677 system_pods.go:61] "etcd-no-preload-336331" [7e4664de-2a98-4d1e-911f-2cb479f4a42c] Running
	I1202 20:54:57.930844  727677 system_pods.go:61] "kindnet-5blk7" [8fc0ac0d-1c00-4916-99fd-a1b4e6eec75e] Running
	I1202 20:54:57.930851  727677 system_pods.go:61] "kube-apiserver-no-preload-336331" [09086c71-7e4a-40ce-b450-3a3a76d2b092] Running
	I1202 20:54:57.930880  727677 system_pods.go:61] "kube-controller-manager-no-preload-336331" [d556ac70-884a-46d0-aa2d-4fbd065aa125] Running
	I1202 20:54:57.930886  727677 system_pods.go:61] "kube-proxy-qc2v9" [91426b3b-e557-4959-91b3-cb5e256351ac] Running
	I1202 20:54:57.930901  727677 system_pods.go:61] "kube-scheduler-no-preload-336331" [b648b0ee-a3d0-41d2-93b9-fe72216bcec3] Running
	I1202 20:54:57.930910  727677 system_pods.go:61] "storage-provisioner" [e3c38dcd-7f1f-4382-bf82-b09cde780bdb] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 20:54:57.930921  727677 system_pods.go:74] duration metric: took 4.81671ms to wait for pod list to return data ...
	I1202 20:54:57.930933  727677 default_sa.go:34] waiting for default service account to be created ...
	I1202 20:54:57.934602  727677 default_sa.go:45] found service account: "default"
	I1202 20:54:57.934629  727677 default_sa.go:55] duration metric: took 3.687516ms for default service account to be created ...
	I1202 20:54:57.934641  727677 system_pods.go:116] waiting for k8s-apps to be running ...
	I1202 20:54:57.939126  727677 system_pods.go:86] 8 kube-system pods found
	I1202 20:54:57.939176  727677 system_pods.go:89] "coredns-7d764666f9-ghxk6" [1696ea67-a1db-437c-bada-07c12d4e9fc8] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 20:54:57.939186  727677 system_pods.go:89] "etcd-no-preload-336331" [7e4664de-2a98-4d1e-911f-2cb479f4a42c] Running
	I1202 20:54:57.939194  727677 system_pods.go:89] "kindnet-5blk7" [8fc0ac0d-1c00-4916-99fd-a1b4e6eec75e] Running
	I1202 20:54:57.939200  727677 system_pods.go:89] "kube-apiserver-no-preload-336331" [09086c71-7e4a-40ce-b450-3a3a76d2b092] Running
	I1202 20:54:57.939207  727677 system_pods.go:89] "kube-controller-manager-no-preload-336331" [d556ac70-884a-46d0-aa2d-4fbd065aa125] Running
	I1202 20:54:57.939212  727677 system_pods.go:89] "kube-proxy-qc2v9" [91426b3b-e557-4959-91b3-cb5e256351ac] Running
	I1202 20:54:57.939217  727677 system_pods.go:89] "kube-scheduler-no-preload-336331" [b648b0ee-a3d0-41d2-93b9-fe72216bcec3] Running
	I1202 20:54:57.939225  727677 system_pods.go:89] "storage-provisioner" [e3c38dcd-7f1f-4382-bf82-b09cde780bdb] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 20:54:57.939256  727677 retry.go:31] will retry after 254.058998ms: missing components: kube-dns
	I1202 20:54:58.199625  727677 system_pods.go:86] 8 kube-system pods found
	I1202 20:54:58.199671  727677 system_pods.go:89] "coredns-7d764666f9-ghxk6" [1696ea67-a1db-437c-bada-07c12d4e9fc8] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 20:54:58.199680  727677 system_pods.go:89] "etcd-no-preload-336331" [7e4664de-2a98-4d1e-911f-2cb479f4a42c] Running
	I1202 20:54:58.199689  727677 system_pods.go:89] "kindnet-5blk7" [8fc0ac0d-1c00-4916-99fd-a1b4e6eec75e] Running
	I1202 20:54:58.199696  727677 system_pods.go:89] "kube-apiserver-no-preload-336331" [09086c71-7e4a-40ce-b450-3a3a76d2b092] Running
	I1202 20:54:58.199703  727677 system_pods.go:89] "kube-controller-manager-no-preload-336331" [d556ac70-884a-46d0-aa2d-4fbd065aa125] Running
	I1202 20:54:58.199708  727677 system_pods.go:89] "kube-proxy-qc2v9" [91426b3b-e557-4959-91b3-cb5e256351ac] Running
	I1202 20:54:58.199713  727677 system_pods.go:89] "kube-scheduler-no-preload-336331" [b648b0ee-a3d0-41d2-93b9-fe72216bcec3] Running
	I1202 20:54:58.199722  727677 system_pods.go:89] "storage-provisioner" [e3c38dcd-7f1f-4382-bf82-b09cde780bdb] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 20:54:58.199742  727677 retry.go:31] will retry after 342.156745ms: missing components: kube-dns
	I1202 20:54:56.780993  743547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-992336
	I1202 20:54:56.782584  743547 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1202 20:54:56.782619  743547 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1202 20:54:56.782691  743547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-992336
	I1202 20:54:56.784631  743547 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-992336"
	W1202 20:54:56.784664  743547 addons.go:248] addon default-storageclass should already be in state true
	I1202 20:54:56.784697  743547 host.go:66] Checking if "old-k8s-version-992336" exists ...
	I1202 20:54:56.786161  743547 cli_runner.go:164] Run: docker container inspect old-k8s-version-992336 --format={{.State.Status}}
	I1202 20:54:56.831348  743547 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/old-k8s-version-992336/id_rsa Username:docker}
	I1202 20:54:56.838761  743547 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/old-k8s-version-992336/id_rsa Username:docker}
	I1202 20:54:56.839118  743547 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1202 20:54:56.839144  743547 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1202 20:54:56.839212  743547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-992336
	I1202 20:54:56.877157  743547 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/old-k8s-version-992336/id_rsa Username:docker}
	I1202 20:54:57.000378  743547 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1202 20:54:57.000478  743547 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1202 20:54:57.001473  743547 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 20:54:57.051688  743547 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 20:54:57.053612  743547 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-992336" to be "Ready" ...
	I1202 20:54:57.062772  743547 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1202 20:54:57.062802  743547 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1202 20:54:57.099632  743547 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1202 20:54:57.099665  743547 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1202 20:54:57.102715  743547 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1202 20:54:57.128982  743547 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1202 20:54:57.129013  743547 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1202 20:54:57.151853  743547 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1202 20:54:57.151871  743547 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1202 20:54:57.180800  743547 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1202 20:54:57.180826  743547 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1202 20:54:57.207394  743547 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1202 20:54:57.207423  743547 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1202 20:54:57.238669  743547 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1202 20:54:57.238701  743547 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1202 20:54:57.264954  743547 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1202 20:54:57.265009  743547 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1202 20:54:57.288116  743547 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1202 20:54:59.263131  743547 node_ready.go:49] node "old-k8s-version-992336" is "Ready"
	I1202 20:54:59.263168  743547 node_ready.go:38] duration metric: took 2.209490941s for node "old-k8s-version-992336" to be "Ready" ...
	I1202 20:54:59.263187  743547 api_server.go:52] waiting for apiserver process to appear ...
	I1202 20:54:59.263244  743547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:55:00.033214  743547 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.981484522s)
	I1202 20:55:00.033304  743547 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.93055748s)
	I1202 20:55:00.490811  743547 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.202644047s)
	I1202 20:55:00.490986  743547 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.227720068s)
	I1202 20:55:00.491022  743547 api_server.go:72] duration metric: took 3.749188411s to wait for apiserver process to appear ...
	I1202 20:55:00.491030  743547 api_server.go:88] waiting for apiserver healthz status ...
	I1202 20:55:00.491062  743547 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1202 20:55:00.493010  743547 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-992336 addons enable metrics-server
	
	I1202 20:55:00.494606  743547 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
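The dashboard manifests above are scp'd to /etc/kubernetes/addons on the node and applied in one kubectl call, after which minikube prints the metrics-server hint. Outside the test harness, the equivalent user-facing commands would look roughly like:

	out/minikube-linux-amd64 -p old-k8s-version-992336 addons enable dashboard
	out/minikube-linux-amd64 -p old-k8s-version-992336 addons list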
	I1202 20:54:58.547286  727677 system_pods.go:86] 8 kube-system pods found
	I1202 20:54:58.547327  727677 system_pods.go:89] "coredns-7d764666f9-ghxk6" [1696ea67-a1db-437c-bada-07c12d4e9fc8] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 20:54:58.547335  727677 system_pods.go:89] "etcd-no-preload-336331" [7e4664de-2a98-4d1e-911f-2cb479f4a42c] Running
	I1202 20:54:58.547344  727677 system_pods.go:89] "kindnet-5blk7" [8fc0ac0d-1c00-4916-99fd-a1b4e6eec75e] Running
	I1202 20:54:58.547349  727677 system_pods.go:89] "kube-apiserver-no-preload-336331" [09086c71-7e4a-40ce-b450-3a3a76d2b092] Running
	I1202 20:54:58.547355  727677 system_pods.go:89] "kube-controller-manager-no-preload-336331" [d556ac70-884a-46d0-aa2d-4fbd065aa125] Running
	I1202 20:54:58.547359  727677 system_pods.go:89] "kube-proxy-qc2v9" [91426b3b-e557-4959-91b3-cb5e256351ac] Running
	I1202 20:54:58.547364  727677 system_pods.go:89] "kube-scheduler-no-preload-336331" [b648b0ee-a3d0-41d2-93b9-fe72216bcec3] Running
	I1202 20:54:58.547371  727677 system_pods.go:89] "storage-provisioner" [e3c38dcd-7f1f-4382-bf82-b09cde780bdb] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 20:54:58.547389  727677 retry.go:31] will retry after 368.951031ms: missing components: kube-dns
	I1202 20:54:58.921450  727677 system_pods.go:86] 8 kube-system pods found
	I1202 20:54:58.921490  727677 system_pods.go:89] "coredns-7d764666f9-ghxk6" [1696ea67-a1db-437c-bada-07c12d4e9fc8] Running
	I1202 20:54:58.921499  727677 system_pods.go:89] "etcd-no-preload-336331" [7e4664de-2a98-4d1e-911f-2cb479f4a42c] Running
	I1202 20:54:58.921505  727677 system_pods.go:89] "kindnet-5blk7" [8fc0ac0d-1c00-4916-99fd-a1b4e6eec75e] Running
	I1202 20:54:58.921510  727677 system_pods.go:89] "kube-apiserver-no-preload-336331" [09086c71-7e4a-40ce-b450-3a3a76d2b092] Running
	I1202 20:54:58.921515  727677 system_pods.go:89] "kube-controller-manager-no-preload-336331" [d556ac70-884a-46d0-aa2d-4fbd065aa125] Running
	I1202 20:54:58.921520  727677 system_pods.go:89] "kube-proxy-qc2v9" [91426b3b-e557-4959-91b3-cb5e256351ac] Running
	I1202 20:54:58.921525  727677 system_pods.go:89] "kube-scheduler-no-preload-336331" [b648b0ee-a3d0-41d2-93b9-fe72216bcec3] Running
	I1202 20:54:58.921530  727677 system_pods.go:89] "storage-provisioner" [e3c38dcd-7f1f-4382-bf82-b09cde780bdb] Running
	I1202 20:54:58.921541  727677 system_pods.go:126] duration metric: took 986.887188ms to wait for k8s-apps to be running ...
	I1202 20:54:58.921550  727677 system_svc.go:44] waiting for kubelet service to be running ....
	I1202 20:54:58.921604  727677 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 20:54:58.936808  727677 system_svc.go:56] duration metric: took 15.220965ms WaitForService to wait for kubelet
	I1202 20:54:58.936842  727677 kubeadm.go:587] duration metric: took 14.914814409s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 20:54:58.936868  727677 node_conditions.go:102] verifying NodePressure condition ...
	I1202 20:54:58.940483  727677 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1202 20:54:58.940521  727677 node_conditions.go:123] node cpu capacity is 8
	I1202 20:54:58.940543  727677 node_conditions.go:105] duration metric: took 3.669091ms to run NodePressure ...
	I1202 20:54:58.940560  727677 start.go:242] waiting for startup goroutines ...
	I1202 20:54:58.940570  727677 start.go:247] waiting for cluster config update ...
	I1202 20:54:58.940582  727677 start.go:256] writing updated cluster config ...
	I1202 20:54:58.940940  727677 ssh_runner.go:195] Run: rm -f paused
	I1202 20:54:58.946442  727677 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1202 20:54:58.950994  727677 pod_ready.go:83] waiting for pod "coredns-7d764666f9-ghxk6" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:54:58.956333  727677 pod_ready.go:94] pod "coredns-7d764666f9-ghxk6" is "Ready"
	I1202 20:54:58.956362  727677 pod_ready.go:86] duration metric: took 5.338212ms for pod "coredns-7d764666f9-ghxk6" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:54:58.961022  727677 pod_ready.go:83] waiting for pod "etcd-no-preload-336331" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:54:58.967156  727677 pod_ready.go:94] pod "etcd-no-preload-336331" is "Ready"
	I1202 20:54:58.967197  727677 pod_ready.go:86] duration metric: took 6.143693ms for pod "etcd-no-preload-336331" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:54:58.970251  727677 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-336331" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:54:58.975849  727677 pod_ready.go:94] pod "kube-apiserver-no-preload-336331" is "Ready"
	I1202 20:54:58.975894  727677 pod_ready.go:86] duration metric: took 5.606631ms for pod "kube-apiserver-no-preload-336331" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:54:58.979032  727677 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-336331" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:54:59.351307  727677 pod_ready.go:94] pod "kube-controller-manager-no-preload-336331" is "Ready"
	I1202 20:54:59.351337  727677 pod_ready.go:86] duration metric: took 372.272976ms for pod "kube-controller-manager-no-preload-336331" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:54:59.552225  727677 pod_ready.go:83] waiting for pod "kube-proxy-qc2v9" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:54:59.951963  727677 pod_ready.go:94] pod "kube-proxy-qc2v9" is "Ready"
	I1202 20:54:59.952012  727677 pod_ready.go:86] duration metric: took 399.754386ms for pod "kube-proxy-qc2v9" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:55:00.151862  727677 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-336331" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:55:00.551517  727677 pod_ready.go:94] pod "kube-scheduler-no-preload-336331" is "Ready"
	I1202 20:55:00.551567  727677 pod_ready.go:86] duration metric: took 399.673435ms for pod "kube-scheduler-no-preload-336331" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:55:00.551585  727677 pod_ready.go:40] duration metric: took 1.605104621s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1202 20:55:00.623116  727677 start.go:625] kubectl: 1.34.2, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1202 20:55:00.625337  727677 out.go:179] * Done! kubectl is now configured to use "no-preload-336331" cluster and "default" namespace by default
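The line before last flags a one-minor-version skew between the host kubectl (1.34.2) and the cluster (1.35.0-beta.0), which is within kubectl's supported skew. If an exact match is wanted, the version-pinned kubectl that minikube manages can be used instead (illustrative):

	out/minikube-linux-amd64 -p no-preload-336331 kubectl -- version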
	I1202 20:54:56.429637  744523 kubeadm.go:884] updating cluster {Name:newest-cni-245604 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-245604 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 20:54:56.429813  744523 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1202 20:54:56.429873  744523 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 20:54:56.470335  744523 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
	I1202 20:54:56.470367  744523 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-beta.0 registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 registry.k8s.io/kube-scheduler:v1.35.0-beta.0 registry.k8s.io/kube-proxy:v1.35.0-beta.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.5-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1202 20:54:56.470443  744523 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 20:54:56.470709  744523 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 20:54:56.470835  744523 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 20:54:56.470944  744523 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 20:54:56.471113  744523 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 20:54:56.471227  744523 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1202 20:54:56.471312  744523 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
	I1202 20:54:56.471416  744523 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1202 20:54:56.474235  744523 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1202 20:54:56.474674  744523 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 20:54:56.474720  744523 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
	I1202 20:54:56.474716  744523 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 20:54:56.474788  744523 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1202 20:54:56.475527  744523 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 20:54:56.475871  744523 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 20:54:56.476514  744523 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 20:54:56.627881  744523 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 20:54:56.635408  744523 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.5-0
	I1202 20:54:56.645721  744523 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.13.1
	I1202 20:54:56.656260  744523 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 20:54:56.665724  744523 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 20:54:56.674018  744523 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1202 20:54:56.686804  744523 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 20:54:56.690583  744523 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" does not exist at hash "45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc" in container runtime
	I1202 20:54:56.690704  744523 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 20:54:56.690760  744523 ssh_runner.go:195] Run: which crictl
	I1202 20:54:56.707645  744523 cache_images.go:118] "registry.k8s.io/etcd:3.6.5-0" needs transfer: "registry.k8s.io/etcd:3.6.5-0" does not exist at hash "a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1" in container runtime
	I1202 20:54:56.707701  744523 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.5-0
	I1202 20:54:56.707771  744523 ssh_runner.go:195] Run: which crictl
	I1202 20:54:56.729690  744523 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139" in container runtime
	I1202 20:54:56.729741  744523 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1202 20:54:56.729790  744523 ssh_runner.go:195] Run: which crictl
	I1202 20:54:56.730634  744523 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" does not exist at hash "aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b" in container runtime
	I1202 20:54:56.730670  744523 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 20:54:56.730712  744523 ssh_runner.go:195] Run: which crictl
	I1202 20:54:56.748602  744523 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" does not exist at hash "7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46" in container runtime
	I1202 20:54:56.748650  744523 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 20:54:56.748713  744523 ssh_runner.go:195] Run: which crictl
	I1202 20:54:56.779664  744523 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I1202 20:54:56.779729  744523 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 20:54:56.779748  744523 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1202 20:54:56.779805  744523 ssh_runner.go:195] Run: which crictl
	I1202 20:54:56.779817  744523 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1202 20:54:56.779663  744523 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-beta.0" does not exist at hash "8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810" in container runtime
	I1202 20:54:56.779842  744523 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 20:54:56.779872  744523 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 20:54:56.779878  744523 ssh_runner.go:195] Run: which crictl
	I1202 20:54:56.779903  744523 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 20:54:56.779731  744523 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1202 20:54:56.877780  744523 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 20:54:56.893301  744523 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1202 20:54:56.893403  744523 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1202 20:54:56.893456  744523 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 20:54:56.893522  744523 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 20:54:56.893577  744523 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 20:54:56.893630  744523 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1202 20:54:56.979424  744523 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 20:54:56.979467  744523 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1202 20:54:56.979427  744523 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1202 20:54:56.979522  744523 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 20:54:56.979694  744523 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 20:54:56.979787  744523 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 20:54:56.979870  744523 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1202 20:54:57.063429  744523 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1
	I1202 20:54:57.063525  744523 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
	I1202 20:54:57.063574  744523 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
	I1202 20:54:57.063635  744523 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1202 20:54:57.063715  744523 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
	I1202 20:54:57.063773  744523 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1202 20:54:57.063798  744523 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1202 20:54:57.063529  744523 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1202 20:54:57.073765  744523 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0
	I1202 20:54:57.073970  744523 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1202 20:54:57.073976  744523 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 20:54:57.074150  744523 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0': No such file or directory
	I1202 20:54:57.074177  744523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0 (23131648 bytes)
	I1202 20:54:57.074309  744523 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0
	I1202 20:54:57.090729  744523 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0': No such file or directory
	I1202 20:54:57.090765  744523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0 (27682304 bytes)
	I1202 20:54:57.090852  744523 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I1202 20:54:57.090867  744523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (23562752 bytes)
	I1202 20:54:57.091043  744523 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0': No such file or directory
	I1202 20:54:57.091207  744523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0 (17239040 bytes)
	I1202 20:54:57.151485  744523 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.5-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.5-0': No such file or directory
	I1202 20:54:57.151520  744523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 --> /var/lib/minikube/images/etcd_3.6.5-0 (22883840 bytes)
	I1202 20:54:57.151798  744523 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
	I1202 20:54:57.151964  744523 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1202 20:54:57.152031  744523 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1202 20:54:57.152553  744523 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1202 20:54:57.254451  744523 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1202 20:54:57.254502  744523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (321024 bytes)
	I1202 20:54:57.257229  744523 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0-beta.0': No such file or directory
	I1202 20:54:57.257317  744523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0 (25788928 bytes)
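Every cached image above follows the same three-step pattern: a stat existence check on the node exits 1, the tarball is copied from the host cache into /var/lib/minikube/images, and it is then loaded into CRI-O's image store with podman. Reproduced by hand inside the node (for example from a shell opened with minikube ssh), the steps would look roughly like:

	stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1      # exit status 1 means the tarball is not on the node yet
	sudo podman load -i /var/lib/minikube/images/pause_3.10.1  # run after the tarball has been copied over SSH
	sudo crictl images | grep pause                            # CRI-O should now list registry.k8s.io/pause:3.10.1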
	I1202 20:54:57.392528  744523 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1202 20:54:57.392642  744523 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	I1202 20:54:57.810758  744523 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 20:54:57.869494  744523 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 from cache
	I1202 20:54:57.869554  744523 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1202 20:54:57.869628  744523 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1202 20:54:57.932920  744523 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1202 20:54:57.932975  744523 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 20:54:57.933024  744523 ssh_runner.go:195] Run: which crictl
	I1202 20:54:59.294687  744523 ssh_runner.go:235] Completed: which crictl: (1.361639017s)
	I1202 20:54:59.294768  744523 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 20:54:59.294838  744523 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: (1.425189868s)
	I1202 20:54:59.294869  744523 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 from cache
	I1202 20:54:59.294918  744523 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1202 20:54:59.294967  744523 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1202 20:55:00.817466  744523 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.522668777s)
	I1202 20:55:00.817551  744523 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 20:55:00.817627  744523 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: (1.522632151s)
	I1202 20:55:00.817648  744523 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 from cache
	I1202 20:55:00.817674  744523 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1202 20:55:00.817704  744523 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1
	I1202 20:55:00.848635  744523 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 20:54:56.870332  736301 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1202 20:54:56.877419  736301 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1202 20:54:56.877436  736301 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1202 20:54:56.902275  736301 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1202 20:54:57.337788  736301 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1202 20:54:57.337991  736301 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 20:54:57.338104  736301 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-997805 minikube.k8s.io/updated_at=2025_12_02T20_54_57_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=c0dec3f81fa1d67ed4ee425e0873d0aa009f9b92 minikube.k8s.io/name=default-k8s-diff-port-997805 minikube.k8s.io/primary=true
	I1202 20:54:57.477817  736301 ops.go:34] apiserver oom_adj: -16
	I1202 20:54:57.477829  736301 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 20:54:57.978414  736301 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 20:54:58.478319  736301 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 20:54:58.980154  736301 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 20:54:59.478288  736301 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 20:54:59.978296  736301 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 20:55:00.478855  736301 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 20:55:00.978336  736301 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 20:55:01.478217  736301 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 20:55:01.560150  736301 kubeadm.go:1114] duration metric: took 4.222209683s to wait for elevateKubeSystemPrivileges
	I1202 20:55:01.560198  736301 kubeadm.go:403] duration metric: took 16.697560258s to StartCluster
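The repeated "kubectl get sa default" calls are minikube waiting for the controller manager to create the namespace's default ServiceAccount before it binds kube-system's default account to cluster-admin through the minikube-rbac ClusterRoleBinding created above. Both objects can be verified afterwards (profile-named context assumed):

	kubectl --context default-k8s-diff-port-997805 -n default get serviceaccount default
	kubectl --context default-k8s-diff-port-997805 get clusterrolebinding minikube-rbac -o wide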
	I1202 20:55:01.560223  736301 settings.go:142] acquiring lock: {Name:mk79858205e0c110b31d2645b49097e349ff37c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:55:01.560308  736301 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-407427/kubeconfig
	I1202 20:55:01.561505  736301 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-407427/kubeconfig: {Name:mk71769b01652992a8aaa239aae4f5c38b3500e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:55:01.561778  736301 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 20:55:01.561831  736301 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1202 20:55:01.561928  736301 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-997805"
	I1202 20:55:01.561953  736301 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-997805"
	I1202 20:55:01.561973  736301 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-997805"
	I1202 20:55:01.561980  736301 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-997805"
	I1202 20:55:01.561813  736301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1202 20:55:01.562021  736301 config.go:182] Loaded profile config "default-k8s-diff-port-997805": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 20:55:01.562004  736301 host.go:66] Checking if "default-k8s-diff-port-997805" exists ...
	I1202 20:55:01.562425  736301 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-997805 --format={{.State.Status}}
	I1202 20:55:01.562664  736301 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-997805 --format={{.State.Status}}
	I1202 20:55:01.564706  736301 out.go:179] * Verifying Kubernetes components...
	I1202 20:55:01.566104  736301 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 20:55:01.589813  736301 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-997805"
	I1202 20:55:01.589873  736301 host.go:66] Checking if "default-k8s-diff-port-997805" exists ...
	I1202 20:55:01.590425  736301 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-997805 --format={{.State.Status}}
	I1202 20:55:01.590987  736301 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 20:55:01.592179  736301 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 20:55:01.592201  736301 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1202 20:55:01.592270  736301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-997805
	I1202 20:55:01.619646  736301 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1202 20:55:01.619694  736301 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1202 20:55:01.619759  736301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-997805
	I1202 20:55:01.627920  736301 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33483 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/default-k8s-diff-port-997805/id_rsa Username:docker}
	I1202 20:55:01.654225  736301 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33483 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/default-k8s-diff-port-997805/id_rsa Username:docker}
	I1202 20:55:01.682285  736301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1202 20:55:01.736624  736301 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 20:55:01.766566  736301 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 20:55:01.788518  736301 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1202 20:55:01.900235  736301 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1202 20:55:01.901603  736301 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-997805" to be "Ready" ...
	I1202 20:55:02.127286  736301 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
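The CoreDNS edit at 20:55:01 rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host gateway (192.168.85.1 on this network). The injected stanza, and a quick way to confirm it landed, would look roughly like:

	kubectl --context default-k8s-diff-port-997805 -n kube-system get configmap coredns -o yaml
	# expected fragment inside the Corefile:
	#   hosts {
	#      192.168.85.1 host.minikube.internal
	#      fallthrough
	#   }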
	I1202 20:55:00.495919  743547 addons.go:530] duration metric: took 3.753750261s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1202 20:55:00.497622  743547 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1202 20:55:00.497666  743547 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1202 20:55:00.991191  743547 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1202 20:55:00.996136  743547 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1202 20:55:00.997346  743547 api_server.go:141] control plane version: v1.28.0
	I1202 20:55:00.997377  743547 api_server.go:131] duration metric: took 506.333183ms to wait for apiserver health ...
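The 500-then-200 sequence above is the apiserver's verbose healthz report: every check passes except poststarthook/rbac/bootstrap-roles, which clears once the bootstrap RBAC policy has been written, so the next probe returns ok. The same verbose report can be fetched through the API, for example:

	kubectl --context old-k8s-version-992336 get --raw '/healthz?verbose'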
	I1202 20:55:00.997390  743547 system_pods.go:43] waiting for kube-system pods to appear ...
	I1202 20:55:01.001606  743547 system_pods.go:59] 8 kube-system pods found
	I1202 20:55:01.001663  743547 system_pods.go:61] "coredns-5dd5756b68-ptzsf" [14b9d2d2-4853-419f-ad27-5d6f4c9c7e2c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 20:55:01.001678  743547 system_pods.go:61] "etcd-old-k8s-version-992336" [22527607-8153-442e-97cb-93555cbcdd3a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1202 20:55:01.001689  743547 system_pods.go:61] "kindnet-jvmsp" [51a76a82-d4d0-4909-a7a7-49ad2e3fd9f0] Running
	I1202 20:55:01.001703  743547 system_pods.go:61] "kube-apiserver-old-k8s-version-992336" [5049999c-2987-49b7-ba74-9d7621b0759a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1202 20:55:01.001716  743547 system_pods.go:61] "kube-controller-manager-old-k8s-version-992336" [34f637f6-d1c4-4620-9705-439b4db0805a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1202 20:55:01.001727  743547 system_pods.go:61] "kube-proxy-qpzt8" [e7130e4a-3fd7-49ba-b6c6-ea6857c76765] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1202 20:55:01.001736  743547 system_pods.go:61] "kube-scheduler-old-k8s-version-992336" [c4e33a26-6df9-440c-9eff-9197bcdfd55c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1202 20:55:01.001748  743547 system_pods.go:61] "storage-provisioner" [398f9134-7016-4782-9541-255e9925dd8d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 20:55:01.001759  743547 system_pods.go:74] duration metric: took 4.359896ms to wait for pod list to return data ...
	I1202 20:55:01.001773  743547 default_sa.go:34] waiting for default service account to be created ...
	I1202 20:55:01.004230  743547 default_sa.go:45] found service account: "default"
	I1202 20:55:01.004254  743547 default_sa.go:55] duration metric: took 2.473014ms for default service account to be created ...
	I1202 20:55:01.004265  743547 system_pods.go:116] waiting for k8s-apps to be running ...
	I1202 20:55:01.008022  743547 system_pods.go:86] 8 kube-system pods found
	I1202 20:55:01.008062  743547 system_pods.go:89] "coredns-5dd5756b68-ptzsf" [14b9d2d2-4853-419f-ad27-5d6f4c9c7e2c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 20:55:01.008112  743547 system_pods.go:89] "etcd-old-k8s-version-992336" [22527607-8153-442e-97cb-93555cbcdd3a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1202 20:55:01.008124  743547 system_pods.go:89] "kindnet-jvmsp" [51a76a82-d4d0-4909-a7a7-49ad2e3fd9f0] Running
	I1202 20:55:01.008135  743547 system_pods.go:89] "kube-apiserver-old-k8s-version-992336" [5049999c-2987-49b7-ba74-9d7621b0759a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1202 20:55:01.008173  743547 system_pods.go:89] "kube-controller-manager-old-k8s-version-992336" [34f637f6-d1c4-4620-9705-439b4db0805a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1202 20:55:01.008187  743547 system_pods.go:89] "kube-proxy-qpzt8" [e7130e4a-3fd7-49ba-b6c6-ea6857c76765] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1202 20:55:01.008197  743547 system_pods.go:89] "kube-scheduler-old-k8s-version-992336" [c4e33a26-6df9-440c-9eff-9197bcdfd55c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1202 20:55:01.008206  743547 system_pods.go:89] "storage-provisioner" [398f9134-7016-4782-9541-255e9925dd8d] Running
	I1202 20:55:01.008233  743547 system_pods.go:126] duration metric: took 3.944236ms to wait for k8s-apps to be running ...
	I1202 20:55:01.008249  743547 system_svc.go:44] waiting for kubelet service to be running ....
	I1202 20:55:01.008306  743547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 20:55:01.025249  743547 system_svc.go:56] duration metric: took 16.988838ms WaitForService to wait for kubelet
	I1202 20:55:01.025289  743547 kubeadm.go:587] duration metric: took 4.283454748s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 20:55:01.025313  743547 node_conditions.go:102] verifying NodePressure condition ...
	I1202 20:55:01.029446  743547 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1202 20:55:01.029479  743547 node_conditions.go:123] node cpu capacity is 8
	I1202 20:55:01.029504  743547 node_conditions.go:105] duration metric: took 4.184149ms to run NodePressure ...
	I1202 20:55:01.029523  743547 start.go:242] waiting for startup goroutines ...
	I1202 20:55:01.029535  743547 start.go:247] waiting for cluster config update ...
	I1202 20:55:01.029549  743547 start.go:256] writing updated cluster config ...
	I1202 20:55:01.029888  743547 ssh_runner.go:195] Run: rm -f paused
	I1202 20:55:01.034901  743547 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1202 20:55:01.039910  743547 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-ptzsf" in "kube-system" namespace to be "Ready" or be gone ...
	W1202 20:55:03.046930  743547 pod_ready.go:104] pod "coredns-5dd5756b68-ptzsf" is not "Ready", error: <nil>
	I1202 20:55:02.295814  744523 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1: (1.478083279s)
	I1202 20:55:02.295852  744523 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 from cache
	I1202 20:55:02.295876  744523 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.5-0
	I1202 20:55:02.295882  744523 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.447208868s)
	I1202 20:55:02.295924  744523 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.5-0
	I1202 20:55:02.295933  744523 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1202 20:55:02.296025  744523 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1202 20:55:03.814698  744523 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.5-0: (1.518744941s)
	I1202 20:55:03.814738  744523 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 from cache
	I1202 20:55:03.814764  744523 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1202 20:55:03.814810  744523 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.518762728s)
	I1202 20:55:03.814865  744523 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1202 20:55:03.814893  744523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1202 20:55:03.814817  744523 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1202 20:55:04.925056  744523 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: (1.110119383s)
	I1202 20:55:04.925120  744523 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 from cache
	I1202 20:55:04.925145  744523 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1202 20:55:04.925195  744523 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1202 20:55:02.128586  736301 addons.go:530] duration metric: took 566.750529ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1202 20:55:02.404897  736301 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-997805" context rescaled to 1 replicas
	W1202 20:55:03.907516  736301 node_ready.go:57] node "default-k8s-diff-port-997805" has "Ready":"False" status (will retry)
	W1202 20:55:06.528176  736301 node_ready.go:57] node "default-k8s-diff-port-997805" has "Ready":"False" status (will retry)
	W1202 20:55:05.546607  743547 pod_ready.go:104] pod "coredns-5dd5756b68-ptzsf" is not "Ready", error: <nil>
	W1202 20:55:08.053813  743547 pod_ready.go:104] pod "coredns-5dd5756b68-ptzsf" is not "Ready", error: <nil>
	I1202 20:55:06.750340  744523 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: (1.825116414s)
	I1202 20:55:06.750375  744523 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 from cache
	I1202 20:55:06.750420  744523 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1202 20:55:06.750473  744523 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1202 20:55:07.327054  744523 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1202 20:55:07.327129  744523 cache_images.go:125] Successfully loaded all cached images
	I1202 20:55:07.327138  744523 cache_images.go:94] duration metric: took 10.856753s to LoadCachedImages
	I1202 20:55:07.327165  744523 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.35.0-beta.0 crio true true} ...
	I1202 20:55:07.327304  744523 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-245604 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-245604 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1202 20:55:07.327405  744523 ssh_runner.go:195] Run: crio config
	I1202 20:55:07.379951  744523 cni.go:84] Creating CNI manager for ""
	I1202 20:55:07.379987  744523 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 20:55:07.380012  744523 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1202 20:55:07.380052  744523 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-245604 NodeName:newest-cni-245604 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 20:55:07.380240  744523 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-245604"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
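Note (editor, not part of the test log): the YAML above is the combined InitConfiguration / ClusterConfiguration / KubeletConfiguration / KubeProxyConfiguration document that minikube generates here and, a few steps later in this run, writes to /var/tmp/minikube/kubeadm.yaml.new before invoking kubeadm. As a hedged aside, such a file can be sanity-checked without changing the node, assuming kubeadm v1.35.0-beta.0 is already on the node's PATH:

    # hypothetical manual validation of the generated config; --dry-run walks the
    # init phases and prints what would be done without persisting any changes
    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run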
	
	I1202 20:55:07.380326  744523 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1202 20:55:07.391201  744523 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0-beta.0': No such file or directory
	
	Initiating transfer...
	I1202 20:55:07.391273  744523 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-beta.0
	I1202 20:55:07.401815  744523 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl.sha256
	I1202 20:55:07.401855  744523 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubelet.sha256
	I1202 20:55:07.401905  744523 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl
	I1202 20:55:07.401953  744523 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 20:55:07.401822  744523 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I1202 20:55:07.402107  744523 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm
	I1202 20:55:07.407476  744523 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm': No such file or directory
	I1202 20:55:07.407517  744523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubeadm --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm (72364216 bytes)
	I1202 20:55:07.407476  744523 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl': No such file or directory
	I1202 20:55:07.407577  744523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubectl --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl (58589368 bytes)
	I1202 20:55:07.424591  744523 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet
	I1202 20:55:07.473519  744523 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet': No such file or directory
	I1202 20:55:07.473565  744523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubelet --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet (58106148 bytes)
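Each binary transfer above follows the same pattern visible in the log: a stat existence check that exits with status 1 because the file is absent, followed by an scp of the binary from the host-side cache. A minimal sketch of that check, assuming the same paths as this run:

    # hypothetical re-run of the existence check minikube performs per binary
    if ! stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet >/dev/null 2>&1; then
      echo "kubelet binary missing; minikube copies it from its local cache"
    fi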
	I1202 20:55:07.942534  744523 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 20:55:07.951564  744523 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I1202 20:55:07.966391  744523 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1202 20:55:07.983466  744523 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1202 20:55:07.998388  744523 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1202 20:55:08.003218  744523 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
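The one-liner above rewrites /etc/hosts in place: it filters out any existing control-plane.minikube.internal entry, appends the current mapping for 192.168.103.2, writes the result to a temp file, and copies it back over /etc/hosts. A quick hedged check that the pin took effect:

    # hypothetical verification of the host entry written above
    grep -n 'control-plane.minikube.internal' /etc/hosts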
	I1202 20:55:08.014772  744523 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 20:55:08.099183  744523 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 20:55:08.128741  744523 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/newest-cni-245604 for IP: 192.168.103.2
	I1202 20:55:08.128766  744523 certs.go:195] generating shared ca certs ...
	I1202 20:55:08.128785  744523 certs.go:227] acquiring lock for ca certs: {Name:mkea11924193dd4dc459e16d70207bde383196df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:55:08.128953  744523 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-407427/.minikube/ca.key
	I1202 20:55:08.129005  744523 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-407427/.minikube/proxy-client-ca.key
	I1202 20:55:08.129016  744523 certs.go:257] generating profile certs ...
	I1202 20:55:08.129092  744523 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/newest-cni-245604/client.key
	I1202 20:55:08.129113  744523 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/newest-cni-245604/client.crt with IP's: []
	I1202 20:55:08.294554  744523 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/newest-cni-245604/client.crt ...
	I1202 20:55:08.294593  744523 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/newest-cni-245604/client.crt: {Name:mk21b09addeeaa3d31317d267da0ba46cdbf969a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:55:08.294817  744523 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/newest-cni-245604/client.key ...
	I1202 20:55:08.294834  744523 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/newest-cni-245604/client.key: {Name:mke0819f820269a4f8de98b3294913aa1fec7fd2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:55:08.294976  744523 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/newest-cni-245604/apiserver.key.b0e612d2
	I1202 20:55:08.295001  744523 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/newest-cni-245604/apiserver.crt.b0e612d2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1202 20:55:08.433583  744523 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/newest-cni-245604/apiserver.crt.b0e612d2 ...
	I1202 20:55:08.433617  744523 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/newest-cni-245604/apiserver.crt.b0e612d2: {Name:mkefebd269deae008218212f66f0a4f5a87aa20c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:55:08.433838  744523 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/newest-cni-245604/apiserver.key.b0e612d2 ...
	I1202 20:55:08.433861  744523 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/newest-cni-245604/apiserver.key.b0e612d2: {Name:mkd6ac856f0fd42299c25dbdfc17df9c0f05a80e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:55:08.433986  744523 certs.go:382] copying /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/newest-cni-245604/apiserver.crt.b0e612d2 -> /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/newest-cni-245604/apiserver.crt
	I1202 20:55:08.434083  744523 certs.go:386] copying /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/newest-cni-245604/apiserver.key.b0e612d2 -> /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/newest-cni-245604/apiserver.key
	I1202 20:55:08.434142  744523 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/newest-cni-245604/proxy-client.key
	I1202 20:55:08.434160  744523 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/newest-cni-245604/proxy-client.crt with IP's: []
	I1202 20:55:08.761700  744523 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/newest-cni-245604/proxy-client.crt ...
	I1202 20:55:08.761739  744523 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/newest-cni-245604/proxy-client.crt: {Name:mk605b3a88d4c93e27b46e0a7f581a336524f65b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:55:08.761993  744523 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/newest-cni-245604/proxy-client.key ...
	I1202 20:55:08.762019  744523 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/newest-cni-245604/proxy-client.key: {Name:mk5be491623b73b348aa62d0bb88d46e4125409d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:55:08.762262  744523 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/411032.pem (1338 bytes)
	W1202 20:55:08.762311  744523 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-407427/.minikube/certs/411032_empty.pem, impossibly tiny 0 bytes
	I1202 20:55:08.762323  744523 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca-key.pem (1679 bytes)
	I1202 20:55:08.762348  744523 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca.pem (1082 bytes)
	I1202 20:55:08.762373  744523 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/cert.pem (1123 bytes)
	I1202 20:55:08.762396  744523 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/key.pem (1675 bytes)
	I1202 20:55:08.762439  744523 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/files/etc/ssl/certs/4110322.pem (1708 bytes)
	I1202 20:55:08.762985  744523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 20:55:08.783273  744523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1202 20:55:08.806927  744523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 20:55:08.827696  744523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1202 20:55:08.848618  744523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/newest-cni-245604/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1202 20:55:08.868662  744523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/newest-cni-245604/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1202 20:55:08.891006  744523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/newest-cni-245604/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 20:55:08.916236  744523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/newest-cni-245604/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1202 20:55:08.936812  744523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 20:55:08.957826  744523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/certs/411032.pem --> /usr/share/ca-certificates/411032.pem (1338 bytes)
	I1202 20:55:08.977945  744523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/files/etc/ssl/certs/4110322.pem --> /usr/share/ca-certificates/4110322.pem (1708 bytes)
	I1202 20:55:08.998361  744523 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 20:55:09.013249  744523 ssh_runner.go:195] Run: openssl version
	I1202 20:55:09.019894  744523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/411032.pem && ln -fs /usr/share/ca-certificates/411032.pem /etc/ssl/certs/411032.pem"
	I1202 20:55:09.029459  744523 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/411032.pem
	I1202 20:55:09.034032  744523 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 20:13 /usr/share/ca-certificates/411032.pem
	I1202 20:55:09.034109  744523 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/411032.pem
	I1202 20:55:09.072802  744523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/411032.pem /etc/ssl/certs/51391683.0"
	I1202 20:55:09.082258  744523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4110322.pem && ln -fs /usr/share/ca-certificates/4110322.pem /etc/ssl/certs/4110322.pem"
	I1202 20:55:09.091835  744523 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4110322.pem
	I1202 20:55:09.096140  744523 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 20:13 /usr/share/ca-certificates/4110322.pem
	I1202 20:55:09.096201  744523 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4110322.pem
	I1202 20:55:09.132108  744523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4110322.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 20:55:09.142782  744523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 20:55:09.153195  744523 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 20:55:09.157649  744523 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 19:55 /usr/share/ca-certificates/minikubeCA.pem
	I1202 20:55:09.157716  744523 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 20:55:09.193211  744523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
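The three certificate blocks above install each CA into the node's OpenSSL trust store: the PEM is copied under /usr/share/ca-certificates, linked into /etc/ssl/certs, and then a <subject-hash>.0 symlink is added so OpenSSL can look it up by hash. A sketch of the same step done by hand for the minikube CA (the hash resolves to b5213941 in this run):

    # hypothetical manual equivalent of the trust-store step in the log
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"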
	I1202 20:55:09.203054  744523 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 20:55:09.207632  744523 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1202 20:55:09.207698  744523 kubeadm.go:401] StartCluster: {Name:newest-cni-245604 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-245604 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 20:55:09.207798  744523 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 20:55:09.207862  744523 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 20:55:09.238428  744523 cri.go:89] found id: ""
	I1202 20:55:09.238499  744523 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 20:55:09.248151  744523 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1202 20:55:09.257704  744523 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1202 20:55:09.257786  744523 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 20:55:09.266501  744523 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1202 20:55:09.266520  744523 kubeadm.go:158] found existing configuration files:
	
	I1202 20:55:09.266571  744523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1202 20:55:09.275990  744523 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1202 20:55:09.276083  744523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1202 20:55:09.284543  744523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1202 20:55:09.293889  744523 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1202 20:55:09.293983  744523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 20:55:09.302404  744523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1202 20:55:09.311315  744523 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1202 20:55:09.311387  744523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 20:55:09.320016  744523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1202 20:55:09.329127  744523 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1202 20:55:09.329200  744523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
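The four grep/rm pairs above are minikube's stale-config cleanup: any kubeconfig under /etc/kubernetes that does not already point at https://control-plane.minikube.internal:8443 is removed before kubeadm init runs (in this run the files simply do not exist yet). A hedged condensed form of that loop:

    # hypothetical condensed equivalent of the cleanup performed above
    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/${f}.conf" 2>/dev/null \
        || sudo rm -f "/etc/kubernetes/${f}.conf"
    done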
	I1202 20:55:09.337504  744523 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1202 20:55:09.450141  744523 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1202 20:55:09.526996  744523 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1202 20:55:08.905121  736301 node_ready.go:57] node "default-k8s-diff-port-997805" has "Ready":"False" status (will retry)
	W1202 20:55:11.405811  736301 node_ready.go:57] node "default-k8s-diff-port-997805" has "Ready":"False" status (will retry)
	W1202 20:55:10.546176  743547 pod_ready.go:104] pod "coredns-5dd5756b68-ptzsf" is not "Ready", error: <nil>
	W1202 20:55:13.046772  743547 pod_ready.go:104] pod "coredns-5dd5756b68-ptzsf" is not "Ready", error: <nil>
	I1202 20:55:13.409009  736301 node_ready.go:49] node "default-k8s-diff-port-997805" is "Ready"
	I1202 20:55:13.409043  736301 node_ready.go:38] duration metric: took 11.507409908s for node "default-k8s-diff-port-997805" to be "Ready" ...
	I1202 20:55:13.409060  736301 api_server.go:52] waiting for apiserver process to appear ...
	I1202 20:55:13.409144  736301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:55:13.428459  736301 api_server.go:72] duration metric: took 11.866557952s to wait for apiserver process to appear ...
	I1202 20:55:13.428518  736301 api_server.go:88] waiting for apiserver healthz status ...
	I1202 20:55:13.428546  736301 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1202 20:55:13.435123  736301 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1202 20:55:13.436462  736301 api_server.go:141] control plane version: v1.34.2
	I1202 20:55:13.436496  736301 api_server.go:131] duration metric: took 7.968671ms to wait for apiserver health ...
	I1202 20:55:13.436508  736301 system_pods.go:43] waiting for kube-system pods to appear ...
	I1202 20:55:13.441129  736301 system_pods.go:59] 8 kube-system pods found
	I1202 20:55:13.441171  736301 system_pods.go:61] "coredns-66bc5c9577-jrln7" [37de7399-6357-4f08-9240-fc9e0d884f47] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 20:55:13.441180  736301 system_pods.go:61] "etcd-default-k8s-diff-port-997805" [446321a2-6abf-495a-8198-da2e43d8c18d] Running
	I1202 20:55:13.441188  736301 system_pods.go:61] "kindnet-rzqpn" [eabc6de0-0707-4a1b-ab5a-4f6e8255bcfb] Running
	I1202 20:55:13.441193  736301 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-997805" [04c7ea7b-9b4e-44ee-8148-107daccf6b7b] Running
	I1202 20:55:13.441205  736301 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-997805" [65af3257-fa45-4d8c-bab3-46c0390ab8df] Running
	I1202 20:55:13.441210  736301 system_pods.go:61] "kube-proxy-s2jpn" [407f6b3c-8d8b-47b0-b994-c061eedc6420] Running
	I1202 20:55:13.441215  736301 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-997805" [803a952f-54ba-4326-b70f-907fc7db368e] Running
	I1202 20:55:13.441222  736301 system_pods.go:61] "storage-provisioner" [08893b97-1192-4f4e-8636-8f2ba82c853d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 20:55:13.441235  736301 system_pods.go:74] duration metric: took 4.718273ms to wait for pod list to return data ...
	I1202 20:55:13.441248  736301 default_sa.go:34] waiting for default service account to be created ...
	I1202 20:55:13.444692  736301 default_sa.go:45] found service account: "default"
	I1202 20:55:13.444725  736301 default_sa.go:55] duration metric: took 3.465464ms for default service account to be created ...
	I1202 20:55:13.444738  736301 system_pods.go:116] waiting for k8s-apps to be running ...
	I1202 20:55:13.449425  736301 system_pods.go:86] 8 kube-system pods found
	I1202 20:55:13.449465  736301 system_pods.go:89] "coredns-66bc5c9577-jrln7" [37de7399-6357-4f08-9240-fc9e0d884f47] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 20:55:13.449473  736301 system_pods.go:89] "etcd-default-k8s-diff-port-997805" [446321a2-6abf-495a-8198-da2e43d8c18d] Running
	I1202 20:55:13.449482  736301 system_pods.go:89] "kindnet-rzqpn" [eabc6de0-0707-4a1b-ab5a-4f6e8255bcfb] Running
	I1202 20:55:13.449487  736301 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-997805" [04c7ea7b-9b4e-44ee-8148-107daccf6b7b] Running
	I1202 20:55:13.449493  736301 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-997805" [65af3257-fa45-4d8c-bab3-46c0390ab8df] Running
	I1202 20:55:13.449498  736301 system_pods.go:89] "kube-proxy-s2jpn" [407f6b3c-8d8b-47b0-b994-c061eedc6420] Running
	I1202 20:55:13.449504  736301 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-997805" [803a952f-54ba-4326-b70f-907fc7db368e] Running
	I1202 20:55:13.449512  736301 system_pods.go:89] "storage-provisioner" [08893b97-1192-4f4e-8636-8f2ba82c853d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 20:55:13.449542  736301 retry.go:31] will retry after 197.970445ms: missing components: kube-dns
	I1202 20:55:13.653108  736301 system_pods.go:86] 8 kube-system pods found
	I1202 20:55:13.653145  736301 system_pods.go:89] "coredns-66bc5c9577-jrln7" [37de7399-6357-4f08-9240-fc9e0d884f47] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 20:55:13.653161  736301 system_pods.go:89] "etcd-default-k8s-diff-port-997805" [446321a2-6abf-495a-8198-da2e43d8c18d] Running
	I1202 20:55:13.653170  736301 system_pods.go:89] "kindnet-rzqpn" [eabc6de0-0707-4a1b-ab5a-4f6e8255bcfb] Running
	I1202 20:55:13.653176  736301 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-997805" [04c7ea7b-9b4e-44ee-8148-107daccf6b7b] Running
	I1202 20:55:13.653182  736301 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-997805" [65af3257-fa45-4d8c-bab3-46c0390ab8df] Running
	I1202 20:55:13.653187  736301 system_pods.go:89] "kube-proxy-s2jpn" [407f6b3c-8d8b-47b0-b994-c061eedc6420] Running
	I1202 20:55:13.653192  736301 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-997805" [803a952f-54ba-4326-b70f-907fc7db368e] Running
	I1202 20:55:13.653199  736301 system_pods.go:89] "storage-provisioner" [08893b97-1192-4f4e-8636-8f2ba82c853d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 20:55:13.653220  736301 retry.go:31] will retry after 312.600116ms: missing components: kube-dns
	I1202 20:55:13.971151  736301 system_pods.go:86] 8 kube-system pods found
	I1202 20:55:13.971209  736301 system_pods.go:89] "coredns-66bc5c9577-jrln7" [37de7399-6357-4f08-9240-fc9e0d884f47] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 20:55:13.971230  736301 system_pods.go:89] "etcd-default-k8s-diff-port-997805" [446321a2-6abf-495a-8198-da2e43d8c18d] Running
	I1202 20:55:13.971254  736301 system_pods.go:89] "kindnet-rzqpn" [eabc6de0-0707-4a1b-ab5a-4f6e8255bcfb] Running
	I1202 20:55:13.971260  736301 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-997805" [04c7ea7b-9b4e-44ee-8148-107daccf6b7b] Running
	I1202 20:55:13.971266  736301 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-997805" [65af3257-fa45-4d8c-bab3-46c0390ab8df] Running
	I1202 20:55:13.971278  736301 system_pods.go:89] "kube-proxy-s2jpn" [407f6b3c-8d8b-47b0-b994-c061eedc6420] Running
	I1202 20:55:13.971283  736301 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-997805" [803a952f-54ba-4326-b70f-907fc7db368e] Running
	I1202 20:55:13.971290  736301 system_pods.go:89] "storage-provisioner" [08893b97-1192-4f4e-8636-8f2ba82c853d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 20:55:13.971313  736301 retry.go:31] will retry after 371.188364ms: missing components: kube-dns
	I1202 20:55:14.348015  736301 system_pods.go:86] 8 kube-system pods found
	I1202 20:55:14.348053  736301 system_pods.go:89] "coredns-66bc5c9577-jrln7" [37de7399-6357-4f08-9240-fc9e0d884f47] Running
	I1202 20:55:14.348061  736301 system_pods.go:89] "etcd-default-k8s-diff-port-997805" [446321a2-6abf-495a-8198-da2e43d8c18d] Running
	I1202 20:55:14.348080  736301 system_pods.go:89] "kindnet-rzqpn" [eabc6de0-0707-4a1b-ab5a-4f6e8255bcfb] Running
	I1202 20:55:14.348086  736301 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-997805" [04c7ea7b-9b4e-44ee-8148-107daccf6b7b] Running
	I1202 20:55:14.348091  736301 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-997805" [65af3257-fa45-4d8c-bab3-46c0390ab8df] Running
	I1202 20:55:14.348096  736301 system_pods.go:89] "kube-proxy-s2jpn" [407f6b3c-8d8b-47b0-b994-c061eedc6420] Running
	I1202 20:55:14.348102  736301 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-997805" [803a952f-54ba-4326-b70f-907fc7db368e] Running
	I1202 20:55:14.348107  736301 system_pods.go:89] "storage-provisioner" [08893b97-1192-4f4e-8636-8f2ba82c853d] Running
	I1202 20:55:14.348118  736301 system_pods.go:126] duration metric: took 903.37182ms to wait for k8s-apps to be running ...
	I1202 20:55:14.348133  736301 system_svc.go:44] waiting for kubelet service to be running ....
	I1202 20:55:14.348196  736301 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 20:55:14.367188  736301 system_svc.go:56] duration metric: took 19.021039ms WaitForService to wait for kubelet
	I1202 20:55:14.367227  736301 kubeadm.go:587] duration metric: took 12.80541748s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 20:55:14.367253  736301 node_conditions.go:102] verifying NodePressure condition ...
	I1202 20:55:14.371134  736301 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1202 20:55:14.371179  736301 node_conditions.go:123] node cpu capacity is 8
	I1202 20:55:14.371197  736301 node_conditions.go:105] duration metric: took 3.938624ms to run NodePressure ...
	I1202 20:55:14.371215  736301 start.go:242] waiting for startup goroutines ...
	I1202 20:55:14.371226  736301 start.go:247] waiting for cluster config update ...
	I1202 20:55:14.371254  736301 start.go:256] writing updated cluster config ...
	I1202 20:55:14.371604  736301 ssh_runner.go:195] Run: rm -f paused
	I1202 20:55:14.379210  736301 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1202 20:55:14.387240  736301 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-jrln7" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:55:14.394276  736301 pod_ready.go:94] pod "coredns-66bc5c9577-jrln7" is "Ready"
	I1202 20:55:14.395107  736301 pod_ready.go:86] duration metric: took 7.823324ms for pod "coredns-66bc5c9577-jrln7" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:55:14.401910  736301 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-997805" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:55:14.411512  736301 pod_ready.go:94] pod "etcd-default-k8s-diff-port-997805" is "Ready"
	I1202 20:55:14.411558  736301 pod_ready.go:86] duration metric: took 9.620923ms for pod "etcd-default-k8s-diff-port-997805" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:55:14.447901  736301 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-997805" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:55:14.454179  736301 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-997805" is "Ready"
	I1202 20:55:14.454210  736301 pod_ready.go:86] duration metric: took 6.226449ms for pod "kube-apiserver-default-k8s-diff-port-997805" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:55:14.456579  736301 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-997805" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:55:14.785098  736301 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-997805" is "Ready"
	I1202 20:55:14.785134  736301 pod_ready.go:86] duration metric: took 328.527351ms for pod "kube-controller-manager-default-k8s-diff-port-997805" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:55:14.984980  736301 pod_ready.go:83] waiting for pod "kube-proxy-s2jpn" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:55:15.384642  736301 pod_ready.go:94] pod "kube-proxy-s2jpn" is "Ready"
	I1202 20:55:15.384681  736301 pod_ready.go:86] duration metric: took 399.670012ms for pod "kube-proxy-s2jpn" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:55:15.584889  736301 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-997805" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:55:15.984320  736301 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-997805" is "Ready"
	I1202 20:55:15.984358  736301 pod_ready.go:86] duration metric: took 399.436392ms for pod "kube-scheduler-default-k8s-diff-port-997805" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:55:15.984380  736301 pod_ready.go:40] duration metric: took 1.605130751s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1202 20:55:16.047890  736301 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1202 20:55:16.050340  736301 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-997805" cluster and "default" namespace by default
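At this point the default-k8s-diff-port-997805 profile is fully started and its context has been written to the kubeconfig the log updated. A hedged follow-up check from the host, assuming kubectl and that kubeconfig are in the usual locations:

    # hypothetical post-start verification against the context named in the log
    kubectl --context default-k8s-diff-port-997805 get nodes
    kubectl --context default-k8s-diff-port-997805 -n kube-system get pods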
	I1202 20:55:17.727885  744523 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1202 20:55:17.727980  744523 kubeadm.go:319] [preflight] Running pre-flight checks
	I1202 20:55:17.728169  744523 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1202 20:55:17.728253  744523 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1202 20:55:17.728332  744523 kubeadm.go:319] OS: Linux
	I1202 20:55:17.728410  744523 kubeadm.go:319] CGROUPS_CPU: enabled
	I1202 20:55:17.728482  744523 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1202 20:55:17.728547  744523 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1202 20:55:17.728622  744523 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1202 20:55:17.728690  744523 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1202 20:55:17.728761  744523 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1202 20:55:17.728820  744523 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1202 20:55:17.728871  744523 kubeadm.go:319] CGROUPS_IO: enabled
	I1202 20:55:17.728957  744523 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1202 20:55:17.729110  744523 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1202 20:55:17.729262  744523 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1202 20:55:17.729355  744523 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1202 20:55:17.732315  744523 out.go:252]   - Generating certificates and keys ...
	I1202 20:55:17.732442  744523 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1202 20:55:17.732545  744523 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1202 20:55:17.732644  744523 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1202 20:55:17.732784  744523 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1202 20:55:17.732890  744523 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1202 20:55:17.732967  744523 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1202 20:55:17.733023  744523 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1202 20:55:17.733202  744523 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-245604] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1202 20:55:17.733257  744523 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1202 20:55:17.733428  744523 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-245604] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1202 20:55:17.733527  744523 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1202 20:55:17.733635  744523 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1202 20:55:17.733710  744523 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1202 20:55:17.733807  744523 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1202 20:55:17.733859  744523 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1202 20:55:17.733921  744523 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1202 20:55:17.734012  744523 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1202 20:55:17.734172  744523 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1202 20:55:17.734269  744523 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1202 20:55:17.734394  744523 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1202 20:55:17.734481  744523 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1202 20:55:17.736342  744523 out.go:252]   - Booting up control plane ...
	I1202 20:55:17.736480  744523 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1202 20:55:17.736596  744523 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1202 20:55:17.736693  744523 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1202 20:55:17.736872  744523 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1202 20:55:17.737043  744523 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1202 20:55:17.737243  744523 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1202 20:55:17.737358  744523 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1202 20:55:17.737420  744523 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1202 20:55:17.737602  744523 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1202 20:55:17.737768  744523 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1202 20:55:17.737849  744523 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 502.139829ms
	I1202 20:55:17.737973  744523 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1202 20:55:17.738135  744523 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1202 20:55:17.738271  744523 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1202 20:55:17.738383  744523 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1202 20:55:17.738463  744523 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.006451763s
	I1202 20:55:17.738554  744523 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.25008497s
	I1202 20:55:17.738650  744523 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.001931401s
	I1202 20:55:17.738800  744523 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1202 20:55:17.738981  744523 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1202 20:55:17.739041  744523 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1202 20:55:17.739329  744523 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-245604 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1202 20:55:17.739413  744523 kubeadm.go:319] [bootstrap-token] Using token: 7nkj4u.5737xh7thqz8h9m6
	I1202 20:55:17.744569  744523 out.go:252]   - Configuring RBAC rules ...
	I1202 20:55:17.744713  744523 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1202 20:55:17.744844  744523 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1202 20:55:17.745106  744523 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1202 20:55:17.745296  744523 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1202 20:55:17.745457  744523 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1202 20:55:17.745579  744523 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1202 20:55:17.745744  744523 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1202 20:55:17.745800  744523 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1202 20:55:17.745866  744523 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1202 20:55:17.745876  744523 kubeadm.go:319] 
	I1202 20:55:17.745958  744523 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1202 20:55:17.745967  744523 kubeadm.go:319] 
	I1202 20:55:17.746089  744523 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1202 20:55:17.746100  744523 kubeadm.go:319] 
	I1202 20:55:17.746133  744523 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1202 20:55:17.746211  744523 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1202 20:55:17.746288  744523 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1202 20:55:17.746297  744523 kubeadm.go:319] 
	I1202 20:55:17.746370  744523 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1202 20:55:17.746380  744523 kubeadm.go:319] 
	I1202 20:55:17.746447  744523 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1202 20:55:17.746456  744523 kubeadm.go:319] 
	I1202 20:55:17.746526  744523 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1202 20:55:17.746642  744523 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1202 20:55:17.746741  744523 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1202 20:55:17.746752  744523 kubeadm.go:319] 
	I1202 20:55:17.746853  744523 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1202 20:55:17.746921  744523 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1202 20:55:17.746927  744523 kubeadm.go:319] 
	I1202 20:55:17.746996  744523 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 7nkj4u.5737xh7thqz8h9m6 \
	I1202 20:55:17.747105  744523 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:48349ffa2369b87cd15480ece56edb1c5c5d97a828930f9dfbf1d1ceccf80ff4 \
	I1202 20:55:17.747126  744523 kubeadm.go:319] 	--control-plane 
	I1202 20:55:17.747130  744523 kubeadm.go:319] 
	I1202 20:55:17.747200  744523 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1202 20:55:17.747206  744523 kubeadm.go:319] 
	I1202 20:55:17.747278  744523 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 7nkj4u.5737xh7thqz8h9m6 \
	I1202 20:55:17.747387  744523 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:48349ffa2369b87cd15480ece56edb1c5c5d97a828930f9dfbf1d1ceccf80ff4 
	I1202 20:55:17.747398  744523 cni.go:84] Creating CNI manager for ""
	I1202 20:55:17.747411  744523 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 20:55:17.749351  744523 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1202 20:55:15.046995  743547 pod_ready.go:104] pod "coredns-5dd5756b68-ptzsf" is not "Ready", error: <nil>
	W1202 20:55:17.058635  743547 pod_ready.go:104] pod "coredns-5dd5756b68-ptzsf" is not "Ready", error: <nil>
	I1202 20:55:17.750855  744523 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1202 20:55:17.756405  744523 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl ...
	I1202 20:55:17.756429  744523 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1202 20:55:17.773111  744523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1202 20:55:18.053229  744523 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1202 20:55:18.053294  744523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 20:55:18.053309  744523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-245604 minikube.k8s.io/updated_at=2025_12_02T20_55_18_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=c0dec3f81fa1d67ed4ee425e0873d0aa009f9b92 minikube.k8s.io/name=newest-cni-245604 minikube.k8s.io/primary=true
	I1202 20:55:18.147980  744523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 20:55:18.174749  744523 ops.go:34] apiserver oom_adj: -16
	I1202 20:55:18.649089  744523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 20:55:19.148197  744523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 20:55:19.648300  744523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 20:55:20.148460  744523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 20:55:20.648917  744523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 20:55:21.148125  744523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 20:55:21.648136  744523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 20:55:22.148743  744523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 20:55:22.219200  744523 kubeadm.go:1114] duration metric: took 4.165964295s to wait for elevateKubeSystemPrivileges
	I1202 20:55:22.219239  744523 kubeadm.go:403] duration metric: took 13.011548887s to StartCluster
	I1202 20:55:22.219285  744523 settings.go:142] acquiring lock: {Name:mk79858205e0c110b31d2645b49097e349ff37c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:55:22.219363  744523 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-407427/kubeconfig
	I1202 20:55:22.220725  744523 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-407427/kubeconfig: {Name:mk71769b01652992a8aaa239aae4f5c38b3500e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:55:22.220981  744523 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 20:55:22.221022  744523 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1202 20:55:22.220994  744523 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1202 20:55:22.221146  744523 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-245604"
	I1202 20:55:22.221163  744523 addons.go:70] Setting default-storageclass=true in profile "newest-cni-245604"
	I1202 20:55:22.221199  744523 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-245604"
	I1202 20:55:22.221169  744523 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-245604"
	I1202 20:55:22.221263  744523 config.go:182] Loaded profile config "newest-cni-245604": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 20:55:22.221301  744523 host.go:66] Checking if "newest-cni-245604" exists ...
	I1202 20:55:22.221566  744523 cli_runner.go:164] Run: docker container inspect newest-cni-245604 --format={{.State.Status}}
	I1202 20:55:22.221873  744523 cli_runner.go:164] Run: docker container inspect newest-cni-245604 --format={{.State.Status}}
	I1202 20:55:22.222714  744523 out.go:179] * Verifying Kubernetes components...
	I1202 20:55:22.223874  744523 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 20:55:22.247282  744523 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 20:55:22.248019  744523 addons.go:239] Setting addon default-storageclass=true in "newest-cni-245604"
	I1202 20:55:22.248083  744523 host.go:66] Checking if "newest-cni-245604" exists ...
	I1202 20:55:22.248524  744523 cli_runner.go:164] Run: docker container inspect newest-cni-245604 --format={{.State.Status}}
	I1202 20:55:22.248538  744523 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 20:55:22.248557  744523 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1202 20:55:22.248628  744523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-245604
	I1202 20:55:22.279723  744523 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1202 20:55:22.279751  744523 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1202 20:55:22.279826  744523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-245604
	I1202 20:55:22.281773  744523 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/newest-cni-245604/id_rsa Username:docker}
	I1202 20:55:22.304309  744523 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/newest-cni-245604/id_rsa Username:docker}
	I1202 20:55:22.316682  744523 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1202 20:55:22.364395  744523 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 20:55:22.401050  744523 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 20:55:22.421315  744523 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1202 20:55:22.484689  744523 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1202 20:55:22.486166  744523 api_server.go:52] waiting for apiserver process to appear ...
	I1202 20:55:22.486229  744523 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:55:22.792266  744523 api_server.go:72] duration metric: took 571.24928ms to wait for apiserver process to appear ...
	I1202 20:55:22.792298  744523 api_server.go:88] waiting for apiserver healthz status ...
	I1202 20:55:22.792322  744523 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1202 20:55:22.797807  744523 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1202 20:55:22.798870  744523 api_server.go:141] control plane version: v1.35.0-beta.0
	I1202 20:55:22.798898  744523 api_server.go:131] duration metric: took 6.592941ms to wait for apiserver health ...
	I1202 20:55:22.798907  744523 system_pods.go:43] waiting for kube-system pods to appear ...
	I1202 20:55:22.799764  744523 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1202 20:55:22.801176  744523 addons.go:530] duration metric: took 580.159704ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1202 20:55:22.802447  744523 system_pods.go:59] 8 kube-system pods found
	I1202 20:55:22.802480  744523 system_pods.go:61] "coredns-7d764666f9-blfz2" [431846c2-b261-4ac9-ae34-f5e7c9bd7c30] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1202 20:55:22.802489  744523 system_pods.go:61] "etcd-newest-cni-245604" [0153ab66-c89e-4cb9-956f-af095ae01a6d] Running
	I1202 20:55:22.802501  744523 system_pods.go:61] "kindnet-flbpz" [5931b461-203e-4906-9cb7-0a7ddcf9c5ae] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1202 20:55:22.802512  744523 system_pods.go:61] "kube-apiserver-newest-cni-245604" [aedbda6a-d95b-4616-9c31-4931593df7d7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1202 20:55:22.802521  744523 system_pods.go:61] "kube-controller-manager-newest-cni-245604" [f659dbd1-c031-4078-a1e3-e75ac74f2ea4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1202 20:55:22.802530  744523 system_pods.go:61] "kube-proxy-khm6s" [990486ba-3da5-4666-b441-52e3fcc4c81f] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1202 20:55:22.802536  744523 system_pods.go:61] "kube-scheduler-newest-cni-245604" [652fff1e-9b61-4947-a077-8f039064ad96] Running
	I1202 20:55:22.802551  744523 system_pods.go:61] "storage-provisioner" [6eb8872b-114f-434c-b0ca-a8eaa4c5da9e] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1202 20:55:22.802561  744523 system_pods.go:74] duration metric: took 3.647122ms to wait for pod list to return data ...
	I1202 20:55:22.802576  744523 default_sa.go:34] waiting for default service account to be created ...
	I1202 20:55:22.805280  744523 default_sa.go:45] found service account: "default"
	I1202 20:55:22.805300  744523 default_sa.go:55] duration metric: took 2.718099ms for default service account to be created ...
	I1202 20:55:22.805312  744523 kubeadm.go:587] duration metric: took 584.301651ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1202 20:55:22.805329  744523 node_conditions.go:102] verifying NodePressure condition ...
	I1202 20:55:22.807801  744523 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1202 20:55:22.807824  744523 node_conditions.go:123] node cpu capacity is 8
	I1202 20:55:22.807840  744523 node_conditions.go:105] duration metric: took 2.507559ms to run NodePressure ...
	I1202 20:55:22.807853  744523 start.go:242] waiting for startup goroutines ...
	I1202 20:55:22.989790  744523 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-245604" context rescaled to 1 replicas
	I1202 20:55:22.989832  744523 start.go:247] waiting for cluster config update ...
	I1202 20:55:22.989848  744523 start.go:256] writing updated cluster config ...
	I1202 20:55:22.990236  744523 ssh_runner.go:195] Run: rm -f paused
	I1202 20:55:23.046430  744523 start.go:625] kubectl: 1.34.2, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1202 20:55:23.049142  744523 out.go:179] * Done! kubectl is now configured to use "newest-cni-245604" cluster and "default" namespace by default
	W1202 20:55:19.545822  743547 pod_ready.go:104] pod "coredns-5dd5756b68-ptzsf" is not "Ready", error: <nil>
	W1202 20:55:22.046695  743547 pod_ready.go:104] pod "coredns-5dd5756b68-ptzsf" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Dec 02 20:55:12 newest-cni-245604 crio[764]: time="2025-12-02T20:55:12.827570718Z" level=info msg="Started container" PID=2115 containerID=3f44038bd9e37157c6be418d31b6bc76a04596cbea7ae4af18a90579ffa5bc11 description=kube-system/kube-controller-manager-newest-cni-245604/kube-controller-manager id=83581e65-f742-4bfd-801b-fe06b5c59732 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4d889ea9dc9c583e3ff037223075a2114047a99be09d3777099639ebcef79c11
	Dec 02 20:55:22 newest-cni-245604 crio[764]: time="2025-12-02T20:55:22.860997318Z" level=info msg="Running pod sandbox: kube-system/kindnet-flbpz/POD" id=3fcf97e8-84ab-489e-bb7b-fb8510a997f1 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 02 20:55:22 newest-cni-245604 crio[764]: time="2025-12-02T20:55:22.861140455Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 20:55:22 newest-cni-245604 crio[764]: time="2025-12-02T20:55:22.862490619Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-khm6s/POD" id=4c9d58d6-d4f4-4f74-a549-03f3ae2d365b name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 02 20:55:22 newest-cni-245604 crio[764]: time="2025-12-02T20:55:22.862544871Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 20:55:22 newest-cni-245604 crio[764]: time="2025-12-02T20:55:22.865620348Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=3fcf97e8-84ab-489e-bb7b-fb8510a997f1 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 02 20:55:22 newest-cni-245604 crio[764]: time="2025-12-02T20:55:22.866190486Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=4c9d58d6-d4f4-4f74-a549-03f3ae2d365b name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 02 20:55:22 newest-cni-245604 crio[764]: time="2025-12-02T20:55:22.867362533Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 02 20:55:22 newest-cni-245604 crio[764]: time="2025-12-02T20:55:22.867924728Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 02 20:55:22 newest-cni-245604 crio[764]: time="2025-12-02T20:55:22.868135662Z" level=info msg="Ran pod sandbox 95df31cefef3669df58fcafaf98c6082d5cbfc2045b7b61242a10aa767ade00f with infra container: kube-system/kindnet-flbpz/POD" id=3fcf97e8-84ab-489e-bb7b-fb8510a997f1 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 02 20:55:22 newest-cni-245604 crio[764]: time="2025-12-02T20:55:22.868691005Z" level=info msg="Ran pod sandbox 5a2195a76f131c6c2d8656e2ea7ef218b5d93494171eac5053d2d406001b047f with infra container: kube-system/kube-proxy-khm6s/POD" id=4c9d58d6-d4f4-4f74-a549-03f3ae2d365b name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 02 20:55:22 newest-cni-245604 crio[764]: time="2025-12-02T20:55:22.869341749Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=8b526eff-07ac-40dd-a5bd-3623e7d96144 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 20:55:22 newest-cni-245604 crio[764]: time="2025-12-02T20:55:22.86948164Z" level=info msg="Image docker.io/kindest/kindnetd:v20250512-df8de77b not found" id=8b526eff-07ac-40dd-a5bd-3623e7d96144 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 20:55:22 newest-cni-245604 crio[764]: time="2025-12-02T20:55:22.869532895Z" level=info msg="Neither image nor artfiact docker.io/kindest/kindnetd:v20250512-df8de77b found" id=8b526eff-07ac-40dd-a5bd-3623e7d96144 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 20:55:22 newest-cni-245604 crio[764]: time="2025-12-02T20:55:22.869661516Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=91b4af39-cc04-4fd3-a68b-83e689ee8828 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 20:55:22 newest-cni-245604 crio[764]: time="2025-12-02T20:55:22.870529061Z" level=info msg="Pulling image: docker.io/kindest/kindnetd:v20250512-df8de77b" id=08f25bc1-9cc9-4fd6-b404-d62bd5f1112c name=/runtime.v1.ImageService/PullImage
	Dec 02 20:55:22 newest-cni-245604 crio[764]: time="2025-12-02T20:55:22.870715323Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=c2c4f542-5bb3-4824-a6d5-518958d9794d name=/runtime.v1.ImageService/ImageStatus
	Dec 02 20:55:22 newest-cni-245604 crio[764]: time="2025-12-02T20:55:22.873417132Z" level=info msg="Trying to access \"docker.io/kindest/kindnetd:v20250512-df8de77b\""
	Dec 02 20:55:22 newest-cni-245604 crio[764]: time="2025-12-02T20:55:22.875189497Z" level=info msg="Creating container: kube-system/kube-proxy-khm6s/kube-proxy" id=d5c63e3b-3c7e-4cad-8d58-4e947f8ac6f6 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 20:55:22 newest-cni-245604 crio[764]: time="2025-12-02T20:55:22.875846803Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 20:55:22 newest-cni-245604 crio[764]: time="2025-12-02T20:55:22.881299436Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 20:55:22 newest-cni-245604 crio[764]: time="2025-12-02T20:55:22.882183471Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 20:55:22 newest-cni-245604 crio[764]: time="2025-12-02T20:55:22.917364837Z" level=info msg="Created container e472472064434eda4ff7c8e25593088d833b8a6207e29032822974f7baeb60b0: kube-system/kube-proxy-khm6s/kube-proxy" id=d5c63e3b-3c7e-4cad-8d58-4e947f8ac6f6 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 20:55:22 newest-cni-245604 crio[764]: time="2025-12-02T20:55:22.918184634Z" level=info msg="Starting container: e472472064434eda4ff7c8e25593088d833b8a6207e29032822974f7baeb60b0" id=34fbd6dd-6018-4848-b794-2f58783d6e7b name=/runtime.v1.RuntimeService/StartContainer
	Dec 02 20:55:22 newest-cni-245604 crio[764]: time="2025-12-02T20:55:22.921210498Z" level=info msg="Started container" PID=2482 containerID=e472472064434eda4ff7c8e25593088d833b8a6207e29032822974f7baeb60b0 description=kube-system/kube-proxy-khm6s/kube-proxy id=34fbd6dd-6018-4848-b794-2f58783d6e7b name=/runtime.v1.RuntimeService/StartContainer sandboxID=5a2195a76f131c6c2d8656e2ea7ef218b5d93494171eac5053d2d406001b047f
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	e472472064434       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810   1 second ago        Running             kube-proxy                0                   5a2195a76f131       kube-proxy-khm6s                            kube-system
	3f44038bd9e37       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc   11 seconds ago      Running             kube-controller-manager   0                   4d889ea9dc9c5       kube-controller-manager-newest-cni-245604   kube-system
	2f2baa1465b81       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b   11 seconds ago      Running             kube-apiserver            0                   684a0a445a9ea       kube-apiserver-newest-cni-245604            kube-system
	58cdf7e8c6bc8       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46   11 seconds ago      Running             kube-scheduler            0                   7a2dd0ad24c3d       kube-scheduler-newest-cni-245604            kube-system
	225a011f2829d       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   11 seconds ago      Running             etcd                      0                   d7defe649f4d0       etcd-newest-cni-245604                      kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-245604
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-245604
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c0dec3f81fa1d67ed4ee425e0873d0aa009f9b92
	                    minikube.k8s.io/name=newest-cni-245604
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_02T20_55_18_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 02 Dec 2025 20:55:14 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-245604
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 02 Dec 2025 20:55:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 02 Dec 2025 20:55:17 +0000   Tue, 02 Dec 2025 20:55:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 02 Dec 2025 20:55:17 +0000   Tue, 02 Dec 2025 20:55:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 02 Dec 2025 20:55:17 +0000   Tue, 02 Dec 2025 20:55:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Tue, 02 Dec 2025 20:55:17 +0000   Tue, 02 Dec 2025 20:55:13 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    newest-cni-245604
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 c31a325af81b969158c21fa769271857
	  System UUID:                db92b9bd-a8ee-4a01-993b-03f9f3976205
	  Boot ID:                    9dd5456f-e394-4b4b-9458-48d26faf507e
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-245604                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         7s
	  kube-system                 kindnet-flbpz                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2s
	  kube-system                 kube-apiserver-newest-cni-245604             250m (3%)     0 (0%)      0 (0%)           0 (0%)         7s
	  kube-system                 kube-controller-manager-newest-cni-245604    200m (2%)     0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-proxy-khm6s                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  kube-system                 kube-scheduler-newest-cni-245604             100m (1%)     0 (0%)      0 (0%)           0 (0%)         7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  3s    node-controller  Node newest-cni-245604 event: Registered Node newest-cni-245604 in Controller
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: a6 a8 6c 2c e4 fb 66 95 0d 9f b9 e6 08 00
	[ +32.254238] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a6 a8 6c 2c e4 fb 66 95 0d 9f b9 e6 08 00
	[Dec 2 20:52] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 62 03 bd 14 45 8a 08 06
	[  +0.000590] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 96 27 ad 0d 40 04 08 06
	[Dec 2 20:53] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 56 d8 4f 6f 7c f7 08 06
	[  +0.000700] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 82 e4 ba c0 78 5f 08 06
	[ +10.119645] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000022] ll header: 00000000: ff ff ff ff ff ff ea 15 d8 88 c9 57 08 06
	[  +2.447166] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 62 df 09 53 d6 6e 08 06
	[  +0.000374] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 62 8d 06 71 0a 5e 08 06
	[Dec 2 20:54] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 12 47 13 50 f6 bc 08 06
	[  +0.001523] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ea 15 d8 88 c9 57 08 06
	[ +22.123549] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 36 0d 45 06 42 2a 08 06
	[  +0.000389] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 56 d8 4f 6f 7c f7 08 06
	
	
	==> etcd [225a011f2829dd0b2db7f7745b80f2ec023b9345bcc78be42aa9179373330841] <==
	{"level":"warn","ts":"2025-12-02T20:55:13.675825Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:55:13.684318Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43080","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:55:13.693919Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43112","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:55:13.704355Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:55:13.713211Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43144","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:55:13.723522Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:55:13.732204Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43186","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:55:13.741851Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:55:13.759374Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43212","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:55:13.769405Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43220","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:55:13.777890Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43226","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:55:13.786547Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:55:13.796288Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43250","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:55:13.805839Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43270","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:55:13.817972Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:55:13.831650Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:55:13.839744Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43318","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:55:13.847429Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:55:13.859560Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43350","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:55:13.874449Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43380","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:55:13.882752Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:55:13.893150Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43424","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:55:13.902170Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:55:13.973449Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43458","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-02T20:55:17.646715Z","caller":"traceutil/trace.go:172","msg":"trace[251768228] transaction","detail":"{read_only:false; response_revision:295; number_of_response:1; }","duration":"118.932249ms","start":"2025-12-02T20:55:17.527766Z","end":"2025-12-02T20:55:17.646698Z","steps":["trace[251768228] 'process raft request'  (duration: 118.630908ms)"],"step_count":1}
	
	
	==> kernel <==
	 20:55:24 up  2:37,  0 user,  load average: 5.96, 4.16, 2.63
	Linux newest-cni-245604 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kube-apiserver [2f2baa1465b81fdd98b6bcbf71ad9a1e5dc859eda9b5654ac98f8db4d19267b6] <==
	I1202 20:55:14.590936       1 controller.go:667] quota admission added evaluator for: namespaces
	I1202 20:55:14.591840       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1202 20:55:14.591859       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1202 20:55:14.592195       1 shared_informer.go:377] "Caches are synced"
	I1202 20:55:14.599377       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I1202 20:55:14.599375       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1202 20:55:14.600054       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1202 20:55:14.606143       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1202 20:55:15.494857       1 storage_scheduling.go:123] created PriorityClass system-node-critical with value 2000001000
	I1202 20:55:15.500145       1 storage_scheduling.go:123] created PriorityClass system-cluster-critical with value 2000000000
	I1202 20:55:15.500234       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1202 20:55:16.206710       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1202 20:55:16.257515       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1202 20:55:16.400413       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1202 20:55:16.410016       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1202 20:55:16.411563       1 controller.go:667] quota admission added evaluator for: endpoints
	I1202 20:55:16.416500       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1202 20:55:16.532607       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1202 20:55:17.142107       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1202 20:55:17.290249       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1202 20:55:17.300043       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1202 20:55:22.033168       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1202 20:55:22.336998       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1202 20:55:22.343366       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1202 20:55:22.533256       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [3f44038bd9e37157c6be418d31b6bc76a04596cbea7ae4af18a90579ffa5bc11] <==
	I1202 20:55:21.337050       1 shared_informer.go:377] "Caches are synced"
	I1202 20:55:21.337251       1 shared_informer.go:377] "Caches are synced"
	I1202 20:55:21.337513       1 shared_informer.go:377] "Caches are synced"
	I1202 20:55:21.337552       1 shared_informer.go:377] "Caches are synced"
	I1202 20:55:21.337780       1 shared_informer.go:377] "Caches are synced"
	I1202 20:55:21.337804       1 shared_informer.go:377] "Caches are synced"
	I1202 20:55:21.337835       1 shared_informer.go:377] "Caches are synced"
	I1202 20:55:21.337851       1 shared_informer.go:377] "Caches are synced"
	I1202 20:55:21.337908       1 shared_informer.go:377] "Caches are synced"
	I1202 20:55:21.337951       1 shared_informer.go:377] "Caches are synced"
	I1202 20:55:21.338035       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1202 20:55:21.337940       1 shared_informer.go:377] "Caches are synced"
	I1202 20:55:21.338161       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="newest-cni-245604"
	I1202 20:55:21.338206       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I1202 20:55:21.337972       1 shared_informer.go:377] "Caches are synced"
	I1202 20:55:21.337929       1 shared_informer.go:377] "Caches are synced"
	I1202 20:55:21.337927       1 shared_informer.go:377] "Caches are synced"
	I1202 20:55:21.337952       1 shared_informer.go:377] "Caches are synced"
	I1202 20:55:21.341864       1 shared_informer.go:370] "Waiting for caches to sync"
	I1202 20:55:21.346225       1 range_allocator.go:433] "Set node PodCIDR" node="newest-cni-245604" podCIDRs=["10.42.0.0/24"]
	I1202 20:55:21.349549       1 shared_informer.go:377] "Caches are synced"
	I1202 20:55:21.438408       1 shared_informer.go:377] "Caches are synced"
	I1202 20:55:21.438446       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1202 20:55:21.438451       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1202 20:55:21.442654       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [e472472064434eda4ff7c8e25593088d833b8a6207e29032822974f7baeb60b0] <==
	I1202 20:55:22.957656       1 server_linux.go:53] "Using iptables proxy"
	I1202 20:55:23.033801       1 shared_informer.go:370] "Waiting for caches to sync"
	I1202 20:55:23.134896       1 shared_informer.go:377] "Caches are synced"
	I1202 20:55:23.134950       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1202 20:55:23.135054       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1202 20:55:23.201546       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1202 20:55:23.201602       1 server_linux.go:136] "Using iptables Proxier"
	I1202 20:55:23.207251       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1202 20:55:23.207635       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1202 20:55:23.207656       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 20:55:23.209006       1 config.go:403] "Starting serviceCIDR config controller"
	I1202 20:55:23.209024       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1202 20:55:23.209091       1 config.go:200] "Starting service config controller"
	I1202 20:55:23.209098       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1202 20:55:23.209095       1 config.go:309] "Starting node config controller"
	I1202 20:55:23.209113       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1202 20:55:23.209118       1 config.go:106] "Starting endpoint slice config controller"
	I1202 20:55:23.209122       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1202 20:55:23.209122       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1202 20:55:23.310086       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1202 20:55:23.310107       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1202 20:55:23.310141       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [58cdf7e8c6bc83acf5a709cfb34d8c79fbaa94a82af09970e2adb370b93027b8] <==
	E1202 20:55:15.502646       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot watch resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope"
	E1202 20:55:15.503976       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1202 20:55:15.519734       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope"
	E1202 20:55:15.520973       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1202 20:55:15.569120       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope"
	E1202 20:55:15.570272       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1202 20:55:15.572410       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot watch resource \"statefulsets\" in API group \"apps\" at the cluster scope"
	E1202 20:55:15.573529       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1202 20:55:15.596150       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope"
	E1202 20:55:15.597515       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1202 20:55:15.612978       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumes\" in API group \"\" at the cluster scope"
	E1202 20:55:15.614198       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1202 20:55:15.623524       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="nodes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"nodes\" in API group \"\" at the cluster scope"
	E1202 20:55:15.624753       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1202 20:55:15.681849       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope"
	E1202 20:55:15.683137       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1202 20:55:15.740382       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope"
	E1202 20:55:15.741541       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1202 20:55:15.826105       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope"
	E1202 20:55:15.827310       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1202 20:55:15.849844       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope"
	E1202 20:55:15.851006       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1202 20:55:15.851330       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\""
	E1202 20:55:15.853350       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	I1202 20:55:17.955318       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 02 20:55:18 newest-cni-245604 kubelet[2193]: E1202 20:55:18.094314    2193 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-245604" containerName="kube-apiserver"
	Dec 02 20:55:18 newest-cni-245604 kubelet[2193]: I1202 20:55:18.133025    2193 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/etcd-newest-cni-245604" podStartSLOduration=1.132992797 podStartE2EDuration="1.132992797s" podCreationTimestamp="2025-12-02 20:55:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:55:18.117214495 +0000 UTC m=+1.183531544" watchObservedRunningTime="2025-12-02 20:55:18.132992797 +0000 UTC m=+1.199309857"
	Dec 02 20:55:18 newest-cni-245604 kubelet[2193]: I1202 20:55:18.150185    2193 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-newest-cni-245604" podStartSLOduration=1.150131686 podStartE2EDuration="1.150131686s" podCreationTimestamp="2025-12-02 20:55:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:55:18.134311237 +0000 UTC m=+1.200628285" watchObservedRunningTime="2025-12-02 20:55:18.150131686 +0000 UTC m=+1.216448735"
	Dec 02 20:55:18 newest-cni-245604 kubelet[2193]: I1202 20:55:18.171818    2193 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-245604" podStartSLOduration=2.171796456 podStartE2EDuration="2.171796456s" podCreationTimestamp="2025-12-02 20:55:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:55:18.171179168 +0000 UTC m=+1.237496217" watchObservedRunningTime="2025-12-02 20:55:18.171796456 +0000 UTC m=+1.238113502"
	Dec 02 20:55:18 newest-cni-245604 kubelet[2193]: I1202 20:55:18.171986    2193 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-245604" podStartSLOduration=1.171976363 podStartE2EDuration="1.171976363s" podCreationTimestamp="2025-12-02 20:55:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:55:18.151049774 +0000 UTC m=+1.217366816" watchObservedRunningTime="2025-12-02 20:55:18.171976363 +0000 UTC m=+1.238293403"
	Dec 02 20:55:19 newest-cni-245604 kubelet[2193]: E1202 20:55:19.083459    2193 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-245604" containerName="kube-controller-manager"
	Dec 02 20:55:19 newest-cni-245604 kubelet[2193]: E1202 20:55:19.083546    2193 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-245604" containerName="kube-apiserver"
	Dec 02 20:55:19 newest-cni-245604 kubelet[2193]: E1202 20:55:19.083714    2193 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-245604" containerName="etcd"
	Dec 02 20:55:19 newest-cni-245604 kubelet[2193]: E1202 20:55:19.083927    2193 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-245604" containerName="kube-scheduler"
	Dec 02 20:55:20 newest-cni-245604 kubelet[2193]: E1202 20:55:20.084894    2193 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-245604" containerName="kube-scheduler"
	Dec 02 20:55:21 newest-cni-245604 kubelet[2193]: E1202 20:55:21.086443    2193 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-245604" containerName="kube-scheduler"
	Dec 02 20:55:21 newest-cni-245604 kubelet[2193]: I1202 20:55:21.431335    2193 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Dec 02 20:55:21 newest-cni-245604 kubelet[2193]: I1202 20:55:21.431982    2193 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Dec 02 20:55:21 newest-cni-245604 kubelet[2193]: E1202 20:55:21.682905    2193 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-245604" containerName="kube-apiserver"
	Dec 02 20:55:22 newest-cni-245604 kubelet[2193]: I1202 20:55:22.574490    2193 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5931b461-203e-4906-9cb7-0a7ddcf9c5ae-xtables-lock\") pod \"kindnet-flbpz\" (UID: \"5931b461-203e-4906-9cb7-0a7ddcf9c5ae\") " pod="kube-system/kindnet-flbpz"
	Dec 02 20:55:22 newest-cni-245604 kubelet[2193]: I1202 20:55:22.574552    2193 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btzdn\" (UniqueName: \"kubernetes.io/projected/5931b461-203e-4906-9cb7-0a7ddcf9c5ae-kube-api-access-btzdn\") pod \"kindnet-flbpz\" (UID: \"5931b461-203e-4906-9cb7-0a7ddcf9c5ae\") " pod="kube-system/kindnet-flbpz"
	Dec 02 20:55:22 newest-cni-245604 kubelet[2193]: I1202 20:55:22.574584    2193 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5931b461-203e-4906-9cb7-0a7ddcf9c5ae-lib-modules\") pod \"kindnet-flbpz\" (UID: \"5931b461-203e-4906-9cb7-0a7ddcf9c5ae\") " pod="kube-system/kindnet-flbpz"
	Dec 02 20:55:22 newest-cni-245604 kubelet[2193]: I1202 20:55:22.574605    2193 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/990486ba-3da5-4666-b441-52e3fcc4c81f-kube-proxy\") pod \"kube-proxy-khm6s\" (UID: \"990486ba-3da5-4666-b441-52e3fcc4c81f\") " pod="kube-system/kube-proxy-khm6s"
	Dec 02 20:55:22 newest-cni-245604 kubelet[2193]: I1202 20:55:22.574626    2193 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fnv8\" (UniqueName: \"kubernetes.io/projected/990486ba-3da5-4666-b441-52e3fcc4c81f-kube-api-access-7fnv8\") pod \"kube-proxy-khm6s\" (UID: \"990486ba-3da5-4666-b441-52e3fcc4c81f\") " pod="kube-system/kube-proxy-khm6s"
	Dec 02 20:55:22 newest-cni-245604 kubelet[2193]: I1202 20:55:22.574649    2193 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/990486ba-3da5-4666-b441-52e3fcc4c81f-xtables-lock\") pod \"kube-proxy-khm6s\" (UID: \"990486ba-3da5-4666-b441-52e3fcc4c81f\") " pod="kube-system/kube-proxy-khm6s"
	Dec 02 20:55:22 newest-cni-245604 kubelet[2193]: I1202 20:55:22.574670    2193 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/990486ba-3da5-4666-b441-52e3fcc4c81f-lib-modules\") pod \"kube-proxy-khm6s\" (UID: \"990486ba-3da5-4666-b441-52e3fcc4c81f\") " pod="kube-system/kube-proxy-khm6s"
	Dec 02 20:55:22 newest-cni-245604 kubelet[2193]: I1202 20:55:22.574697    2193 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/5931b461-203e-4906-9cb7-0a7ddcf9c5ae-cni-cfg\") pod \"kindnet-flbpz\" (UID: \"5931b461-203e-4906-9cb7-0a7ddcf9c5ae\") " pod="kube-system/kindnet-flbpz"
	Dec 02 20:55:23 newest-cni-245604 kubelet[2193]: E1202 20:55:23.693300    2193 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-245604" containerName="kube-scheduler"
	Dec 02 20:55:23 newest-cni-245604 kubelet[2193]: I1202 20:55:23.707041    2193 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-khm6s" podStartSLOduration=1.707015258 podStartE2EDuration="1.707015258s" podCreationTimestamp="2025-12-02 20:55:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:55:23.10664313 +0000 UTC m=+6.172960180" watchObservedRunningTime="2025-12-02 20:55:23.707015258 +0000 UTC m=+6.773332304"
	Dec 02 20:55:24 newest-cni-245604 kubelet[2193]: E1202 20:55:24.647830    2193 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-245604" containerName="etcd"
	

                                                
                                                
-- /stdout --
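Note on the kubelet excerpt above: the PodCIDR it reports (10.42.0.0/24) is the per-node /24 carved out of the 10.42.0.0/16 cluster CIDR configured for this profile via --extra-config=kubeadm.pod-network-cidr (see the start command in the audit table further down). As a sketch, and assuming the single-node profile so the node name matches the profile name, the allocated CIDR could be confirmed directly from the node object:

	# illustrative manual check, not part of the recorded test output
	kubectl --context newest-cni-245604 get node newest-cni-245604 -o jsonpath='{.spec.podCIDR}'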
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-245604 -n newest-cni-245604
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-245604 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-7d764666f9-blfz2 kindnet-flbpz storage-provisioner
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-245604 describe pod coredns-7d764666f9-blfz2 kindnet-flbpz storage-provisioner
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-245604 describe pod coredns-7d764666f9-blfz2 kindnet-flbpz storage-provisioner: exit status 1 (63.786611ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-7d764666f9-blfz2" not found
	Error from server (NotFound): pods "kindnet-flbpz" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-245604 describe pod coredns-7d764666f9-blfz2 kindnet-flbpz storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.57s)
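The three "NotFound" errors in the post-mortem above are a quirk of the helper rather than additional failures: the non-running pods were collected with a cluster-wide query (-A), but the follow-up describe omits the namespace and therefore looks in default instead of kube-system. A namespace-scoped lookup, sketched here for manual debugging (the pods may have been recreated or removed by the time it is run):

	# hypothetical manual re-run with the namespace added
	kubectl --context newest-cni-245604 -n kube-system describe pod coredns-7d764666f9-blfz2 kindnet-flbpz storage-provisioner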

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.53s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-997805 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-997805 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (320.757748ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T20:55:24Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-997805 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
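The MK_ADDON_ENABLE_PAUSED error above comes from the pre-flight pause check: before enabling an addon, minikube lists paused containers by shelling out to runc on the node ("sudo runc list -f json"), and that command fails because /run/runc does not exist. One plausible reason is that this CRI-O setup keeps its OCI runtime state elsewhere (for example when crun is the configured runtime); that explanation is an assumption, not something shown in the log. A sketch of reproducing the check by hand against this profile:

	# reproduce the failing pause check and look for the runc state directory
	minikube ssh -p default-k8s-diff-port-997805 -- sudo runc list -f json
	minikube ssh -p default-k8s-diff-port-997805 -- sudo ls /run/runc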
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-997805 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-997805 describe deploy/metrics-server -n kube-system: exit status 1 (80.267948ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-997805 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
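The assertion above checks that the metrics-server deployment was rewritten to pull from the fake registry (fake.domain/registry.k8s.io/echoserver:1.4); since the enable itself aborted, the deployment was never created and the deployment info is empty. A sketch of the manual equivalent of that check (in this state it would likewise return NotFound):

	# illustrative: read the image the metrics-server deployment would be running
	kubectl --context default-k8s-diff-port-997805 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[*].image}'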
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-997805
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-997805:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c25b25f1d6428e1d0325a0468af68a3a621d93f89c284fc809071a0c5b3636f1",
	        "Created": "2025-12-02T20:54:37.048348832Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 737743,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-02T20:54:37.092012237Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/c25b25f1d6428e1d0325a0468af68a3a621d93f89c284fc809071a0c5b3636f1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c25b25f1d6428e1d0325a0468af68a3a621d93f89c284fc809071a0c5b3636f1/hostname",
	        "HostsPath": "/var/lib/docker/containers/c25b25f1d6428e1d0325a0468af68a3a621d93f89c284fc809071a0c5b3636f1/hosts",
	        "LogPath": "/var/lib/docker/containers/c25b25f1d6428e1d0325a0468af68a3a621d93f89c284fc809071a0c5b3636f1/c25b25f1d6428e1d0325a0468af68a3a621d93f89c284fc809071a0c5b3636f1-json.log",
	        "Name": "/default-k8s-diff-port-997805",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-997805:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-997805",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c25b25f1d6428e1d0325a0468af68a3a621d93f89c284fc809071a0c5b3636f1",
	                "LowerDir": "/var/lib/docker/overlay2/438615afda3ee0db74f277419380adcb83f92340686904c8b7104d5c82409f9b-init/diff:/var/lib/docker/overlay2/49bb44e987885c48600d5ae7ad3c81cad82a00f38070c0460882c5746d9fae59/diff",
	                "MergedDir": "/var/lib/docker/overlay2/438615afda3ee0db74f277419380adcb83f92340686904c8b7104d5c82409f9b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/438615afda3ee0db74f277419380adcb83f92340686904c8b7104d5c82409f9b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/438615afda3ee0db74f277419380adcb83f92340686904c8b7104d5c82409f9b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-997805",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-997805/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-997805",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-997805",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-997805",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "992bacc9aebe59906dcc75e8688140671c01cc7068847f36affb874581441bc5",
	            "SandboxKey": "/var/run/docker/netns/992bacc9aebe",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33483"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33484"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33487"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33485"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33486"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-997805": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "13fe483902b92417fb08b9a25307f2df4dbcc897dff65b84bbef9f2f680f60c8",
	                    "EndpointID": "674ff58d5a2a347a6363252681d74032379f0624a9172145cf0bdf922a187ff8",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "c2:f1:17:32:68:ee",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-997805",
	                        "c25b25f1d642"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
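From the inspect output above, the profile's API server port 8444/tcp is published on 127.0.0.1:33486 (SSH on 33483). The same Go-template pattern that appears later in these logs for port 22 can be pointed at 8444, for example:

	# extract the published host port for the API server from the container metadata
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}' default-k8s-diff-port-997805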
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-997805 -n default-k8s-diff-port-997805
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-997805 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-997805 logs -n 25: (1.161662234s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬───────
──────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼───────
──────────────┤
	│ ssh     │ -p bridge-775392 sudo systemctl cat docker --no-pager                                                                                                                                                                                                │ bridge-775392                │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │ 02 Dec 25 20:54 UTC │
	│ ssh     │ -p bridge-775392 sudo cat /etc/docker/daemon.json                                                                                                                                                                                                    │ bridge-775392                │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │                     │
	│ ssh     │ -p bridge-775392 sudo docker system info                                                                                                                                                                                                             │ bridge-775392                │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │                     │
	│ ssh     │ -p bridge-775392 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                                            │ bridge-775392                │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │                     │
	│ ssh     │ -p bridge-775392 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                            │ bridge-775392                │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │ 02 Dec 25 20:54 UTC │
	│ ssh     │ -p bridge-775392 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                       │ bridge-775392                │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │                     │
	│ ssh     │ -p bridge-775392 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                                 │ bridge-775392                │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │ 02 Dec 25 20:54 UTC │
	│ ssh     │ -p bridge-775392 sudo cri-dockerd --version                                                                                                                                                                                                          │ bridge-775392                │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │ 02 Dec 25 20:54 UTC │
	│ ssh     │ -p bridge-775392 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                            │ bridge-775392                │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │                     │
	│ ssh     │ -p bridge-775392 sudo systemctl cat containerd --no-pager                                                                                                                                                                                            │ bridge-775392                │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │ 02 Dec 25 20:54 UTC │
	│ ssh     │ -p bridge-775392 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                                     │ bridge-775392                │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │ 02 Dec 25 20:54 UTC │
	│ ssh     │ -p bridge-775392 sudo cat /etc/containerd/config.toml                                                                                                                                                                                                │ bridge-775392                │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │ 02 Dec 25 20:54 UTC │
	│ ssh     │ -p bridge-775392 sudo containerd config dump                                                                                                                                                                                                         │ bridge-775392                │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │ 02 Dec 25 20:54 UTC │
	│ ssh     │ -p bridge-775392 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                                  │ bridge-775392                │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │ 02 Dec 25 20:54 UTC │
	│ ssh     │ -p bridge-775392 sudo systemctl cat crio --no-pager                                                                                                                                                                                                  │ bridge-775392                │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │ 02 Dec 25 20:54 UTC │
	│ ssh     │ -p bridge-775392 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                        │ bridge-775392                │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │ 02 Dec 25 20:54 UTC │
	│ ssh     │ -p bridge-775392 sudo crio config                                                                                                                                                                                                                    │ bridge-775392                │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │ 02 Dec 25 20:54 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-992336 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ old-k8s-version-992336       │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │ 02 Dec 25 20:54 UTC │
	│ delete  │ -p bridge-775392                                                                                                                                                                                                                                     │ bridge-775392                │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │ 02 Dec 25 20:54 UTC │
	│ start   │ -p old-k8s-version-992336 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0        │ old-k8s-version-992336       │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │                     │
	│ start   │ -p newest-cni-245604 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-245604            │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │ 02 Dec 25 20:55 UTC │
	│ addons  │ enable metrics-server -p no-preload-336331 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ no-preload-336331            │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │                     │
	│ stop    │ -p no-preload-336331 --alsologtostderr -v=3                                                                                                                                                                                                          │ no-preload-336331            │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-245604 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-245604            │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-997805 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-997805 │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴───────
──────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 20:54:51
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 20:54:51.248686  744523 out.go:360] Setting OutFile to fd 1 ...
	I1202 20:54:51.248931  744523 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:54:51.248939  744523 out.go:374] Setting ErrFile to fd 2...
	I1202 20:54:51.248944  744523 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:54:51.249199  744523 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-407427/.minikube/bin
	I1202 20:54:51.249701  744523 out.go:368] Setting JSON to false
	I1202 20:54:51.250904  744523 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":9435,"bootTime":1764699456,"procs":364,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 20:54:51.250979  744523 start.go:143] virtualization: kvm guest
	I1202 20:54:51.252790  744523 out.go:179] * [newest-cni-245604] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1202 20:54:51.253899  744523 out.go:179]   - MINIKUBE_LOCATION=21997
	I1202 20:54:51.253977  744523 notify.go:221] Checking for updates...
	I1202 20:54:51.255724  744523 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 20:54:51.257813  744523 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-407427/kubeconfig
	I1202 20:54:51.259113  744523 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-407427/.minikube
	I1202 20:54:51.260359  744523 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1202 20:54:51.261736  744523 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 20:54:51.263851  744523 config.go:182] Loaded profile config "default-k8s-diff-port-997805": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 20:54:51.264036  744523 config.go:182] Loaded profile config "no-preload-336331": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 20:54:51.264195  744523 config.go:182] Loaded profile config "old-k8s-version-992336": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1202 20:54:51.264328  744523 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 20:54:51.291120  744523 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1202 20:54:51.291259  744523 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 20:54:51.351993  744523 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-02 20:54:51.3414757 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 20:54:51.352148  744523 docker.go:319] overlay module found
	I1202 20:54:51.354258  744523 out.go:179] * Using the docker driver based on user configuration
	I1202 20:54:51.355593  744523 start.go:309] selected driver: docker
	I1202 20:54:51.355614  744523 start.go:927] validating driver "docker" against <nil>
	I1202 20:54:51.355627  744523 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 20:54:51.356356  744523 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 20:54:51.426417  744523 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-02 20:54:51.413315172 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 20:54:51.426660  744523 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1202 20:54:51.426715  744523 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1202 20:54:51.427099  744523 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1202 20:54:51.430750  744523 out.go:179] * Using Docker driver with root privileges
	I1202 20:54:51.432181  744523 cni.go:84] Creating CNI manager for ""
	I1202 20:54:51.432273  744523 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 20:54:51.432289  744523 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1202 20:54:51.432396  744523 start.go:353] cluster config:
	{Name:newest-cni-245604 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-245604 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePat
h: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 20:54:51.433991  744523 out.go:179] * Starting "newest-cni-245604" primary control-plane node in "newest-cni-245604" cluster
	I1202 20:54:51.435712  744523 cache.go:134] Beginning downloading kic base image for docker with crio
	I1202 20:54:51.437418  744523 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1202 20:54:51.438923  744523 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1202 20:54:51.439029  744523 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 20:54:51.471094  744523 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1202 20:54:51.471120  744523 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1202 20:54:51.534888  744523 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	W1202 20:54:51.754467  744523 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1202 20:54:51.754662  744523 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/newest-cni-245604/config.json ...
	I1202 20:54:51.754711  744523 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/newest-cni-245604/config.json: {Name:mkdd178ed72e91eb36b68a6cb223fd44f9a5dcff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:54:51.754782  744523 cache.go:107] acquiring lock: {Name:mkf03491d08646dc0a2273e6c20a49756d4e1761 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 20:54:51.754824  744523 cache.go:107] acquiring lock: {Name:mk4453b54b86b3689d0543734fa82feede2f4f33 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 20:54:51.754826  744523 cache.go:107] acquiring lock: {Name:mk8c99492104b5abf1d260aa0432b08c059c9259 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 20:54:51.754883  744523 cache.go:107] acquiring lock: {Name:mk5eb5d2ea906db41607942a8f8093a266b381cd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 20:54:51.754913  744523 cache.go:107] acquiring lock: {Name:mkda13332b8e3f844bd42c29502a9c7671b1ad3b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 20:54:51.754935  744523 cache.go:243] Successfully downloaded all kic artifacts
	I1202 20:54:51.754899  744523 cache.go:107] acquiring lock: {Name:mk01b60fbf34196e8795139c06a53061b5bbef1c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 20:54:51.754947  744523 cache.go:115] /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 exists
	I1202 20:54:51.754967  744523 cache.go:115] /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1202 20:54:51.754900  744523 cache.go:115] /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1202 20:54:51.754974  744523 start.go:360] acquireMachinesLock for newest-cni-245604: {Name:mk8ec8505d24ccef2b962d884ea41e40436fd883 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 20:54:51.754980  744523 cache.go:115] /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1202 20:54:51.754981  744523 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1" took 69.251µs
	I1202 20:54:51.754990  744523 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 242.138µs
	I1202 20:54:51.754996  744523 cache.go:115] /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1202 20:54:51.755004  744523 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1202 20:54:51.755001  744523 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1202 20:54:51.754963  744523 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0" took 141.678µs
	I1202 20:54:51.755018  744523 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1202 20:54:51.754982  744523 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 158.842µs
	I1202 20:54:51.755022  744523 start.go:364] duration metric: took 35.783µs to acquireMachinesLock for "newest-cni-245604"
	I1202 20:54:51.754970  744523 cache.go:115] /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1202 20:54:51.755028  744523 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 147.229µs
	I1202 20:54:51.755038  744523 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1202 20:54:51.755036  744523 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 141.032µs
	I1202 20:54:51.755051  744523 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1202 20:54:51.754782  744523 cache.go:107] acquiring lock: {Name:mk911a7415c1db6121866a16aaa8d547d8fc27e6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 20:54:51.755025  744523 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1202 20:54:51.754791  744523 cache.go:107] acquiring lock: {Name:mk1ce3ec6c8a0a78faf5ccb0bb487dc5a506ffff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 20:54:51.755107  744523 cache.go:115] /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1202 20:54:51.755130  744523 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 351.859µs
	I1202 20:54:51.755151  744523 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1202 20:54:51.755051  744523 start.go:93] Provisioning new machine with config: &{Name:newest-cni-245604 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-245604 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpti
mizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 20:54:51.755192  744523 start.go:125] createHost starting for "" (driver="docker")
	I1202 20:54:51.755295  744523 cache.go:115] /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1202 20:54:51.755311  744523 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 531.706µs
	I1202 20:54:51.755333  744523 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1202 20:54:51.755341  744523 cache.go:87] Successfully saved all images to host disk.
	I1202 20:54:49.807275  736301 out.go:252]   - Booting up control plane ...
	I1202 20:54:49.807399  736301 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1202 20:54:49.807498  736301 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1202 20:54:49.807593  736301 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1202 20:54:49.820733  736301 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1202 20:54:49.820866  736301 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1202 20:54:49.828232  736301 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1202 20:54:49.829367  736301 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1202 20:54:49.829419  736301 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1202 20:54:49.939090  736301 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1202 20:54:49.939273  736301 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1202 20:54:50.939981  736301 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.00098678s
	I1202 20:54:50.943942  736301 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1202 20:54:50.944097  736301 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8444/livez
	I1202 20:54:50.944200  736301 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1202 20:54:50.944356  736301 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W1202 20:54:48.889199  727677 node_ready.go:57] node "no-preload-336331" has "Ready":"False" status (will retry)
	W1202 20:54:50.889639  727677 node_ready.go:57] node "no-preload-336331" has "Ready":"False" status (will retry)
	W1202 20:54:52.890301  727677 node_ready.go:57] node "no-preload-336331" has "Ready":"False" status (will retry)
	I1202 20:54:48.917338  743547 out.go:252] * Restarting existing docker container for "old-k8s-version-992336" ...
	I1202 20:54:48.917418  743547 cli_runner.go:164] Run: docker start old-k8s-version-992336
	I1202 20:54:49.233874  743547 cli_runner.go:164] Run: docker container inspect old-k8s-version-992336 --format={{.State.Status}}
	I1202 20:54:49.254208  743547 kic.go:430] container "old-k8s-version-992336" state is running.
	I1202 20:54:49.254576  743547 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-992336
	I1202 20:54:49.276197  743547 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/old-k8s-version-992336/config.json ...
	I1202 20:54:49.276474  743547 machine.go:94] provisionDockerMachine start ...
	I1202 20:54:49.276556  743547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-992336
	I1202 20:54:49.295873  743547 main.go:143] libmachine: Using SSH client type: native
	I1202 20:54:49.296238  743547 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33488 <nil> <nil>}
	I1202 20:54:49.296255  743547 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 20:54:49.296917  743547 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:36762->127.0.0.1:33488: read: connection reset by peer
	I1202 20:54:52.482289  743547 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-992336
	
	I1202 20:54:52.482326  743547 ubuntu.go:182] provisioning hostname "old-k8s-version-992336"
	I1202 20:54:52.482403  743547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-992336
	I1202 20:54:52.508620  743547 main.go:143] libmachine: Using SSH client type: native
	I1202 20:54:52.509026  743547 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33488 <nil> <nil>}
	I1202 20:54:52.509045  743547 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-992336 && echo "old-k8s-version-992336" | sudo tee /etc/hostname
	I1202 20:54:52.680116  743547 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-992336
	
	I1202 20:54:52.680210  743547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-992336
	I1202 20:54:52.706295  743547 main.go:143] libmachine: Using SSH client type: native
	I1202 20:54:52.706638  743547 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33488 <nil> <nil>}
	I1202 20:54:52.706666  743547 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-992336' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-992336/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-992336' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 20:54:52.868164  743547 main.go:143] libmachine: SSH cmd err, output: <nil>: 
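The hosts-file script above leaves a single 127.0.1.1 mapping for the node name inside the container. A minimal check over the same SSH session, purely illustrative and using the hostname from this run:

	grep '127.0.1.1' /etc/hosts
	# expected: 127.0.1.1 old-k8s-version-992336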
	I1202 20:54:52.868203  743547 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-407427/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-407427/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-407427/.minikube}
	I1202 20:54:52.868253  743547 ubuntu.go:190] setting up certificates
	I1202 20:54:52.868266  743547 provision.go:84] configureAuth start
	I1202 20:54:52.868351  743547 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-992336
	I1202 20:54:52.896120  743547 provision.go:143] copyHostCerts
	I1202 20:54:52.896189  743547 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-407427/.minikube/ca.pem, removing ...
	I1202 20:54:52.896201  743547 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-407427/.minikube/ca.pem
	I1202 20:54:52.896288  743547 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-407427/.minikube/ca.pem (1082 bytes)
	I1202 20:54:52.896403  743547 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-407427/.minikube/cert.pem, removing ...
	I1202 20:54:52.896415  743547 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-407427/.minikube/cert.pem
	I1202 20:54:52.896450  743547 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-407427/.minikube/cert.pem (1123 bytes)
	I1202 20:54:52.896523  743547 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-407427/.minikube/key.pem, removing ...
	I1202 20:54:52.896534  743547 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-407427/.minikube/key.pem
	I1202 20:54:52.896565  743547 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-407427/.minikube/key.pem (1675 bytes)
	I1202 20:54:52.896627  743547 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-407427/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-992336 san=[127.0.0.1 192.168.94.2 localhost minikube old-k8s-version-992336]
	I1202 20:54:53.042224  743547 provision.go:177] copyRemoteCerts
	I1202 20:54:53.042352  743547 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 20:54:53.042421  743547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-992336
	I1202 20:54:53.066302  743547 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/old-k8s-version-992336/id_rsa Username:docker}
	I1202 20:54:53.180785  743547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 20:54:53.215027  743547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1202 20:54:53.249137  743547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1202 20:54:53.276327  743547 provision.go:87] duration metric: took 408.04457ms to configureAuth
	I1202 20:54:53.276364  743547 ubuntu.go:206] setting minikube options for container-runtime
	I1202 20:54:53.276661  743547 config.go:182] Loaded profile config "old-k8s-version-992336": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1202 20:54:53.276881  743547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-992336
	I1202 20:54:53.305450  743547 main.go:143] libmachine: Using SSH client type: native
	I1202 20:54:53.305788  743547 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33488 <nil> <nil>}
	I1202 20:54:53.305819  743547 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 20:54:53.745248  743547 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 20:54:53.745280  743547 machine.go:97] duration metric: took 4.468788993s to provisionDockerMachine
	I1202 20:54:53.745299  743547 start.go:293] postStartSetup for "old-k8s-version-992336" (driver="docker")
	I1202 20:54:53.745313  743547 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 20:54:53.745402  743547 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 20:54:53.745451  743547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-992336
	I1202 20:54:53.773838  743547 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/old-k8s-version-992336/id_rsa Username:docker}
	I1202 20:54:53.877082  743547 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 20:54:53.881285  743547 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 20:54:53.881316  743547 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 20:54:53.881332  743547 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-407427/.minikube/addons for local assets ...
	I1202 20:54:53.881412  743547 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-407427/.minikube/files for local assets ...
	I1202 20:54:53.881515  743547 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-407427/.minikube/files/etc/ssl/certs/4110322.pem -> 4110322.pem in /etc/ssl/certs
	I1202 20:54:53.881673  743547 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 20:54:53.890517  743547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/files/etc/ssl/certs/4110322.pem --> /etc/ssl/certs/4110322.pem (1708 bytes)
	I1202 20:54:53.911353  743547 start.go:296] duration metric: took 166.0361ms for postStartSetup
	I1202 20:54:53.911460  743547 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 20:54:53.911513  743547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-992336
	I1202 20:54:53.934180  743547 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/old-k8s-version-992336/id_rsa Username:docker}
	I1202 20:54:54.034877  743547 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 20:54:54.040410  743547 fix.go:56] duration metric: took 5.146736871s for fixHost
	I1202 20:54:54.040443  743547 start.go:83] releasing machines lock for "old-k8s-version-992336", held for 5.146795457s
	I1202 20:54:54.040529  743547 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-992336
	I1202 20:54:54.060426  743547 ssh_runner.go:195] Run: cat /version.json
	I1202 20:54:54.060485  743547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-992336
	I1202 20:54:54.060496  743547 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 20:54:54.060573  743547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-992336
	I1202 20:54:54.082901  743547 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/old-k8s-version-992336/id_rsa Username:docker}
	I1202 20:54:54.082948  743547 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/old-k8s-version-992336/id_rsa Username:docker}
	I1202 20:54:54.182659  743547 ssh_runner.go:195] Run: systemctl --version
	I1202 20:54:54.241255  743547 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 20:54:54.279690  743547 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 20:54:54.284969  743547 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 20:54:54.285109  743547 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 20:54:54.294313  743547 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1202 20:54:54.294343  743547 start.go:496] detecting cgroup driver to use...
	I1202 20:54:54.294378  743547 detect.go:190] detected "systemd" cgroup driver on host os
	I1202 20:54:54.294431  743547 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 20:54:54.311476  743547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 20:54:54.325741  743547 docker.go:218] disabling cri-docker service (if available) ...
	I1202 20:54:54.325809  743547 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 20:54:54.342382  743547 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 20:54:54.356905  743547 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 20:54:54.449514  743547 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 20:54:54.540100  743547 docker.go:234] disabling docker service ...
	I1202 20:54:54.540175  743547 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 20:54:54.557954  743547 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 20:54:54.575642  743547 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 20:54:54.677171  743547 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 20:54:54.787938  743547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 20:54:54.805380  743547 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 20:54:54.824665  743547 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1202 20:54:54.824729  743547 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:54:54.837044  743547 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1202 20:54:54.837142  743547 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:54:54.849210  743547 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:54:54.860907  743547 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:54:54.871629  743547 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 20:54:54.882082  743547 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:54:54.893928  743547 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:54:54.905219  743547 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:54:54.917032  743547 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 20:54:54.927659  743547 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 20:54:54.938429  743547 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 20:54:55.059022  743547 ssh_runner.go:195] Run: sudo systemctl restart crio
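Taken together, the sed edits above amount to roughly the following settings in /etc/crio/crio.conf.d/02-crio.conf (a sketch reconstructed from the commands, not a dump of the real file; surrounding sections and other keys are omitted):

	pause_image = "registry.k8s.io/pause:3.9"
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]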
	I1202 20:54:55.238974  743547 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 20:54:55.239099  743547 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 20:54:55.245135  743547 start.go:564] Will wait 60s for crictl version
	I1202 20:54:55.245210  743547 ssh_runner.go:195] Run: which crictl
	I1202 20:54:55.250232  743547 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 20:54:55.282324  743547 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
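crictl resolves its endpoint from the /etc/crictl.yaml written a few lines earlier; the explicit form of the same version query would be (illustrative):

	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version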
	I1202 20:54:55.282412  743547 ssh_runner.go:195] Run: crio --version
	I1202 20:54:55.320935  743547 ssh_runner.go:195] Run: crio --version
	I1202 20:54:55.361998  743547 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.2 ...
	I1202 20:54:52.527997  736301 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.583967669s
	I1202 20:54:53.622779  736301 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.678806737s
	I1202 20:54:55.446643  736301 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.502546315s
	I1202 20:54:55.467578  736301 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1202 20:54:55.486539  736301 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1202 20:54:55.505049  736301 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1202 20:54:55.505398  736301 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-997805 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1202 20:54:55.516932  736301 kubeadm.go:319] [bootstrap-token] Using token: clatot.hc48jyk0hvxonz06
	I1202 20:54:51.758445  744523 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1202 20:54:51.758787  744523 start.go:159] libmachine.API.Create for "newest-cni-245604" (driver="docker")
	I1202 20:54:51.758834  744523 client.go:173] LocalClient.Create starting
	I1202 20:54:51.758936  744523 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca.pem
	I1202 20:54:51.759008  744523 main.go:143] libmachine: Decoding PEM data...
	I1202 20:54:51.759032  744523 main.go:143] libmachine: Parsing certificate...
	I1202 20:54:51.759118  744523 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21997-407427/.minikube/certs/cert.pem
	I1202 20:54:51.759148  744523 main.go:143] libmachine: Decoding PEM data...
	I1202 20:54:51.759171  744523 main.go:143] libmachine: Parsing certificate...
	I1202 20:54:51.759637  744523 cli_runner.go:164] Run: docker network inspect newest-cni-245604 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1202 20:54:51.781898  744523 cli_runner.go:211] docker network inspect newest-cni-245604 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1202 20:54:51.781982  744523 network_create.go:284] running [docker network inspect newest-cni-245604] to gather additional debugging logs...
	I1202 20:54:51.782006  744523 cli_runner.go:164] Run: docker network inspect newest-cni-245604
	W1202 20:54:51.801637  744523 cli_runner.go:211] docker network inspect newest-cni-245604 returned with exit code 1
	I1202 20:54:51.801678  744523 network_create.go:287] error running [docker network inspect newest-cni-245604]: docker network inspect newest-cni-245604: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-245604 not found
	I1202 20:54:51.801697  744523 network_create.go:289] output of [docker network inspect newest-cni-245604]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-245604 not found
	
	** /stderr **
	I1202 20:54:51.801890  744523 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 20:54:51.824870  744523 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-acf081edf266 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:42:04:c0:60:47:62} reservation:<nil>}
	I1202 20:54:51.825911  744523 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-9623a21fb225 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:6a:fc:8b:40:15:1b} reservation:<nil>}
	I1202 20:54:51.826609  744523 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-f2b79e7e26a5 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:0e:c7:f4:38:1c:32} reservation:<nil>}
	I1202 20:54:51.827584  744523 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-be4fb772701b IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:22:87:5f:38:96:b7} reservation:<nil>}
	I1202 20:54:51.828542  744523 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-13fe483902b9 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:a2:a4:21:b2:62:5a} reservation:<nil>}
	I1202 20:54:51.829195  744523 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-65ab470fa0e2 IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:16:23:28:7c:c5:24} reservation:<nil>}
	I1202 20:54:51.830231  744523 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ed3d00}
	I1202 20:54:51.830266  744523 network_create.go:124] attempt to create docker network newest-cni-245604 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1202 20:54:51.830316  744523 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-245604 newest-cni-245604
	I1202 20:54:51.887973  744523 network_create.go:108] docker network newest-cni-245604 192.168.103.0/24 created
	I1202 20:54:51.888023  744523 kic.go:121] calculated static IP "192.168.103.2" for the "newest-cni-245604" container
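The subnet chosen by the scan above can be confirmed by hand with the same inspect template minikube uses (illustrative):

	docker network inspect newest-cni-245604 --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'
	# expected: 192.168.103.0/24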
	I1202 20:54:51.888128  744523 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1202 20:54:51.909991  744523 cli_runner.go:164] Run: docker volume create newest-cni-245604 --label name.minikube.sigs.k8s.io=newest-cni-245604 --label created_by.minikube.sigs.k8s.io=true
	I1202 20:54:51.933849  744523 oci.go:103] Successfully created a docker volume newest-cni-245604
	I1202 20:54:51.933969  744523 cli_runner.go:164] Run: docker run --rm --name newest-cni-245604-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-245604 --entrypoint /usr/bin/test -v newest-cni-245604:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib
	I1202 20:54:52.386347  744523 oci.go:107] Successfully prepared a docker volume newest-cni-245604
	I1202 20:54:52.386442  744523 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	W1202 20:54:52.386653  744523 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1202 20:54:52.386714  744523 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1202 20:54:52.386763  744523 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1202 20:54:52.468472  744523 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-245604 --name newest-cni-245604 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-245604 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-245604 --network newest-cni-245604 --ip 192.168.103.2 --volume newest-cni-245604:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b
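The --publish=127.0.0.1::22 flag in the docker run above is what produces the ephemeral SSH host port used in the dials below (33493 in this run); it can be read back directly (illustrative):

	docker port newest-cni-245604 22/tcp
	# e.g. 127.0.0.1:33493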
	I1202 20:54:52.834787  744523 cli_runner.go:164] Run: docker container inspect newest-cni-245604 --format={{.State.Running}}
	I1202 20:54:52.859568  744523 cli_runner.go:164] Run: docker container inspect newest-cni-245604 --format={{.State.Status}}
	I1202 20:54:52.888318  744523 cli_runner.go:164] Run: docker exec newest-cni-245604 stat /var/lib/dpkg/alternatives/iptables
	I1202 20:54:52.947034  744523 oci.go:144] the created container "newest-cni-245604" has a running status.
	I1202 20:54:52.947106  744523 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21997-407427/.minikube/machines/newest-cni-245604/id_rsa...
	I1202 20:54:53.161566  744523 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21997-407427/.minikube/machines/newest-cni-245604/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1202 20:54:53.197985  744523 cli_runner.go:164] Run: docker container inspect newest-cni-245604 --format={{.State.Status}}
	I1202 20:54:53.229219  744523 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1202 20:54:53.229249  744523 kic_runner.go:114] Args: [docker exec --privileged newest-cni-245604 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1202 20:54:53.293954  744523 cli_runner.go:164] Run: docker container inspect newest-cni-245604 --format={{.State.Status}}
	I1202 20:54:53.319791  744523 machine.go:94] provisionDockerMachine start ...
	I1202 20:54:53.319987  744523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-245604
	I1202 20:54:53.347829  744523 main.go:143] libmachine: Using SSH client type: native
	I1202 20:54:53.348214  744523 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33493 <nil> <nil>}
	I1202 20:54:53.348237  744523 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 20:54:53.514601  744523 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-245604
	
	I1202 20:54:53.514632  744523 ubuntu.go:182] provisioning hostname "newest-cni-245604"
	I1202 20:54:53.514706  744523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-245604
	I1202 20:54:53.543984  744523 main.go:143] libmachine: Using SSH client type: native
	I1202 20:54:53.544329  744523 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33493 <nil> <nil>}
	I1202 20:54:53.544354  744523 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-245604 && echo "newest-cni-245604" | sudo tee /etc/hostname
	I1202 20:54:53.729217  744523 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-245604
	
	I1202 20:54:53.729302  744523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-245604
	I1202 20:54:53.755581  744523 main.go:143] libmachine: Using SSH client type: native
	I1202 20:54:53.755911  744523 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33493 <nil> <nil>}
	I1202 20:54:53.755944  744523 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-245604' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-245604/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-245604' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 20:54:53.904745  744523 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1202 20:54:53.904773  744523 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-407427/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-407427/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-407427/.minikube}
	I1202 20:54:53.904818  744523 ubuntu.go:190] setting up certificates
	I1202 20:54:53.904831  744523 provision.go:84] configureAuth start
	I1202 20:54:53.904887  744523 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-245604
	I1202 20:54:53.926340  744523 provision.go:143] copyHostCerts
	I1202 20:54:53.926412  744523 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-407427/.minikube/ca.pem, removing ...
	I1202 20:54:53.926426  744523 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-407427/.minikube/ca.pem
	I1202 20:54:53.926508  744523 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-407427/.minikube/ca.pem (1082 bytes)
	I1202 20:54:53.926637  744523 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-407427/.minikube/cert.pem, removing ...
	I1202 20:54:53.926646  744523 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-407427/.minikube/cert.pem
	I1202 20:54:53.926677  744523 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-407427/.minikube/cert.pem (1123 bytes)
	I1202 20:54:53.926741  744523 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-407427/.minikube/key.pem, removing ...
	I1202 20:54:53.926749  744523 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-407427/.minikube/key.pem
	I1202 20:54:53.926776  744523 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-407427/.minikube/key.pem (1675 bytes)
	I1202 20:54:53.926832  744523 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-407427/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca-key.pem org=jenkins.newest-cni-245604 san=[127.0.0.1 192.168.103.2 localhost minikube newest-cni-245604]
	I1202 20:54:54.033669  744523 provision.go:177] copyRemoteCerts
	I1202 20:54:54.033748  744523 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 20:54:54.033805  744523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-245604
	I1202 20:54:54.055356  744523 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/newest-cni-245604/id_rsa Username:docker}
	I1202 20:54:54.161586  744523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 20:54:54.183507  744523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1202 20:54:54.203578  744523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1202 20:54:54.223521  744523 provision.go:87] duration metric: took 318.655712ms to configureAuth
	I1202 20:54:54.223562  744523 ubuntu.go:206] setting minikube options for container-runtime
	I1202 20:54:54.223787  744523 config.go:182] Loaded profile config "newest-cni-245604": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 20:54:54.223932  744523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-245604
	I1202 20:54:54.243976  744523 main.go:143] libmachine: Using SSH client type: native
	I1202 20:54:54.244266  744523 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33493 <nil> <nil>}
	I1202 20:54:54.244285  744523 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 20:54:54.563270  744523 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 20:54:54.563301  744523 machine.go:97] duration metric: took 1.243461731s to provisionDockerMachine
	I1202 20:54:54.563315  744523 client.go:176] duration metric: took 2.804467588s to LocalClient.Create
	I1202 20:54:54.563333  744523 start.go:167] duration metric: took 2.804549056s to libmachine.API.Create "newest-cni-245604"
	I1202 20:54:54.563343  744523 start.go:293] postStartSetup for "newest-cni-245604" (driver="docker")
	I1202 20:54:54.563359  744523 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 20:54:54.563434  744523 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 20:54:54.563487  744523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-245604
	I1202 20:54:54.587633  744523 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/newest-cni-245604/id_rsa Username:docker}
	I1202 20:54:54.704139  744523 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 20:54:54.711871  744523 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 20:54:54.711907  744523 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 20:54:54.711923  744523 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-407427/.minikube/addons for local assets ...
	I1202 20:54:54.711998  744523 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-407427/.minikube/files for local assets ...
	I1202 20:54:54.712158  744523 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-407427/.minikube/files/etc/ssl/certs/4110322.pem -> 4110322.pem in /etc/ssl/certs
	I1202 20:54:54.712308  744523 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 20:54:54.727333  744523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/files/etc/ssl/certs/4110322.pem --> /etc/ssl/certs/4110322.pem (1708 bytes)
	I1202 20:54:54.756096  744523 start.go:296] duration metric: took 192.737221ms for postStartSetup
	I1202 20:54:54.756539  744523 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-245604
	I1202 20:54:54.779332  744523 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/newest-cni-245604/config.json ...
	I1202 20:54:54.779682  744523 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 20:54:54.779734  744523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-245604
	I1202 20:54:54.804251  744523 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/newest-cni-245604/id_rsa Username:docker}
	I1202 20:54:54.909217  744523 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 20:54:54.915212  744523 start.go:128] duration metric: took 3.160001099s to createHost
	I1202 20:54:54.915249  744523 start.go:83] releasing machines lock for "newest-cni-245604", held for 3.160217279s
	I1202 20:54:54.915329  744523 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-245604
	I1202 20:54:54.939674  744523 ssh_runner.go:195] Run: cat /version.json
	I1202 20:54:54.939748  744523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-245604
	I1202 20:54:54.939782  744523 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 20:54:54.939880  744523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-245604
	I1202 20:54:54.964142  744523 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/newest-cni-245604/id_rsa Username:docker}
	I1202 20:54:54.965218  744523 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/newest-cni-245604/id_rsa Username:docker}
	I1202 20:54:55.150195  744523 ssh_runner.go:195] Run: systemctl --version
	I1202 20:54:55.159061  744523 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 20:54:55.203041  744523 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 20:54:55.209011  744523 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 20:54:55.209128  744523 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 20:54:55.242651  744523 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1202 20:54:55.242680  744523 start.go:496] detecting cgroup driver to use...
	I1202 20:54:55.242718  744523 detect.go:190] detected "systemd" cgroup driver on host os
	I1202 20:54:55.242772  744523 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 20:54:55.265988  744523 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 20:54:55.283822  744523 docker.go:218] disabling cri-docker service (if available) ...
	I1202 20:54:55.283891  744523 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 20:54:55.306452  744523 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 20:54:55.330861  744523 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 20:54:55.437811  744523 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 20:54:55.558513  744523 docker.go:234] disabling docker service ...
	I1202 20:54:55.558591  744523 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 20:54:55.580602  744523 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 20:54:55.596697  744523 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 20:54:55.714954  744523 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 20:54:55.820710  744523 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 20:54:55.834948  744523 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 20:54:55.852971  744523 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1202 20:54:55.853038  744523 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:54:55.866995  744523 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1202 20:54:55.867101  744523 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:54:55.884788  744523 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:54:55.901200  744523 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:54:55.918342  744523 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 20:54:55.928191  744523 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:54:55.938885  744523 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:54:55.955266  744523 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:54:55.965380  744523 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 20:54:55.974592  744523 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 20:54:55.983203  744523 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 20:54:56.089565  744523 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1202 20:54:56.246748  744523 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 20:54:56.246822  744523 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 20:54:56.251650  744523 start.go:564] Will wait 60s for crictl version
	I1202 20:54:56.251725  744523 ssh_runner.go:195] Run: which crictl
	I1202 20:54:56.259643  744523 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 20:54:56.294960  744523 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1202 20:54:56.295118  744523 ssh_runner.go:195] Run: crio --version
	I1202 20:54:56.335315  744523 ssh_runner.go:195] Run: crio --version
	I1202 20:54:56.375510  744523 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.2 ...
	I1202 20:54:56.376891  744523 cli_runner.go:164] Run: docker network inspect newest-cni-245604 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 20:54:56.404101  744523 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1202 20:54:56.410059  744523 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 20:54:56.428224  744523 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1202 20:54:55.363273  743547 cli_runner.go:164] Run: docker network inspect old-k8s-version-992336 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 20:54:55.391463  743547 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1202 20:54:55.395875  743547 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 20:54:55.407541  743547 kubeadm.go:884] updating cluster {Name:old-k8s-version-992336 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-992336 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 20:54:55.407687  743547 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1202 20:54:55.407752  743547 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 20:54:55.448888  743547 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 20:54:55.448914  743547 crio.go:433] Images already preloaded, skipping extraction
	I1202 20:54:55.448981  743547 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 20:54:55.488955  743547 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 20:54:55.488987  743547 cache_images.go:86] Images are preloaded, skipping loading
	I1202 20:54:55.488997  743547 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.28.0 crio true true} ...
	I1202 20:54:55.489187  743547 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-992336 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-992336 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
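In the kubelet drop-in above, the empty ExecStart= line is deliberate: systemd only allows one ExecStart for a regular service, so the drop-in first clears the value inherited from the base kubelet.service and then sets its own. The merged result can be inspected on the node (illustrative):

	systemctl cat kubelet
	# shows the base unit followed by the 10-kubeadm.conf drop-in written further below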
	I1202 20:54:55.489281  743547 ssh_runner.go:195] Run: crio config
	I1202 20:54:55.555002  743547 cni.go:84] Creating CNI manager for ""
	I1202 20:54:55.555029  743547 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 20:54:55.555046  743547 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1202 20:54:55.555089  743547 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-992336 NodeName:old-k8s-version-992336 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 20:54:55.555302  743547 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-992336"
	  kubeletExtraArgs:
	    node-ip: 192.168.94.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1202 20:54:55.555391  743547 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1202 20:54:55.564702  743547 binaries.go:51] Found k8s binaries, skipping transfer
	I1202 20:54:55.564796  743547 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 20:54:55.574017  743547 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1202 20:54:55.590044  743547 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1202 20:54:55.607238  743547 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
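The kubeadm config printed above is staged on the node as /var/tmp/minikube/kubeadm.yaml.new here; if the bundled kubeadm supports the validate subcommand, the same file can be sanity-checked offline (illustrative, paths taken from this run):

	sudo /var/lib/minikube/binaries/v1.28.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new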
	I1202 20:54:55.624302  743547 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1202 20:54:55.629565  743547 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 20:54:55.647331  743547 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 20:54:55.746705  743547 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 20:54:55.778223  743547 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/old-k8s-version-992336 for IP: 192.168.94.2
	I1202 20:54:55.778263  743547 certs.go:195] generating shared ca certs ...
	I1202 20:54:55.778286  743547 certs.go:227] acquiring lock for ca certs: {Name:mkea11924193dd4dc459e16d70207bde383196df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:54:55.778470  743547 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-407427/.minikube/ca.key
	I1202 20:54:55.778540  743547 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-407427/.minikube/proxy-client-ca.key
	I1202 20:54:55.778555  743547 certs.go:257] generating profile certs ...
	I1202 20:54:55.778691  743547 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/old-k8s-version-992336/client.key
	I1202 20:54:55.778774  743547 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/old-k8s-version-992336/apiserver.key.26e20487
	I1202 20:54:55.778826  743547 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/old-k8s-version-992336/proxy-client.key
	I1202 20:54:55.778974  743547 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/411032.pem (1338 bytes)
	W1202 20:54:55.779023  743547 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-407427/.minikube/certs/411032_empty.pem, impossibly tiny 0 bytes
	I1202 20:54:55.779039  743547 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca-key.pem (1679 bytes)
	I1202 20:54:55.779165  743547 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca.pem (1082 bytes)
	I1202 20:54:55.779217  743547 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/cert.pem (1123 bytes)
	I1202 20:54:55.779265  743547 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/key.pem (1675 bytes)
	I1202 20:54:55.779335  743547 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/files/etc/ssl/certs/4110322.pem (1708 bytes)
	I1202 20:54:55.780235  743547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 20:54:55.803356  743547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1202 20:54:55.826463  743547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 20:54:55.847561  743547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1202 20:54:55.875979  743547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/old-k8s-version-992336/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1202 20:54:55.904532  743547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/old-k8s-version-992336/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1202 20:54:55.931492  743547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/old-k8s-version-992336/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 20:54:55.951900  743547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/old-k8s-version-992336/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1202 20:54:55.972640  743547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 20:54:55.992667  743547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/certs/411032.pem --> /usr/share/ca-certificates/411032.pem (1338 bytes)
	I1202 20:54:56.015555  743547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/files/etc/ssl/certs/4110322.pem --> /usr/share/ca-certificates/4110322.pem (1708 bytes)
	I1202 20:54:56.042035  743547 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 20:54:56.059891  743547 ssh_runner.go:195] Run: openssl version
	I1202 20:54:56.068335  743547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 20:54:56.079667  743547 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 20:54:56.085893  743547 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 19:55 /usr/share/ca-certificates/minikubeCA.pem
	I1202 20:54:56.085977  743547 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 20:54:56.143330  743547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 20:54:56.156665  743547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/411032.pem && ln -fs /usr/share/ca-certificates/411032.pem /etc/ssl/certs/411032.pem"
	I1202 20:54:56.169457  743547 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/411032.pem
	I1202 20:54:56.174154  743547 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 20:13 /usr/share/ca-certificates/411032.pem
	I1202 20:54:56.174225  743547 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/411032.pem
	I1202 20:54:56.213730  743547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/411032.pem /etc/ssl/certs/51391683.0"
	I1202 20:54:56.223332  743547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4110322.pem && ln -fs /usr/share/ca-certificates/4110322.pem /etc/ssl/certs/4110322.pem"
	I1202 20:54:56.233176  743547 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4110322.pem
	I1202 20:54:56.237408  743547 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 20:13 /usr/share/ca-certificates/4110322.pem
	I1202 20:54:56.237477  743547 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4110322.pem
	I1202 20:54:56.290593  743547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4110322.pem /etc/ssl/certs/3ec20f2e.0"
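Each openssl-hash / ln -fs pair above publishes a CA under /etc/ssl/certs/<subject-hash>.0 so OpenSSL-style directory lookups can resolve it. A small sketch of that pattern, shelling out to the same openssl invocation (paths are illustrative; the real commands run inside the node over SSH with sudo):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCACert asks openssl for the certificate's subject hash and exposes the
// cert under <certsDir>/<hash>.0, the layout TLS libraries scan for trust roots.
func linkCACert(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // emulate ln -fs: replace an existing link if present
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
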
	I1202 20:54:56.304474  743547 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 20:54:56.310604  743547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1202 20:54:56.360515  743547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1202 20:54:56.413594  743547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1202 20:54:56.475091  743547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1202 20:54:56.542472  743547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1202 20:54:56.584464  743547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
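Each `openssl x509 -checkend 86400` call above asks whether a certificate expires within the next 24 hours. The same check can be expressed natively with crypto/x509; this is only an equivalent sketch, not minikube's implementation:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the certificate's NotAfter falls inside the
// next window, the condition -checkend flags as "will expire".
func expiresWithin(certPath string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(certPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", certPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	// Path is illustrative; the log checks the control-plane client and etcd certs.
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
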
	I1202 20:54:56.628756  743547 kubeadm.go:401] StartCluster: {Name:old-k8s-version-992336 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-992336 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 20:54:56.628871  743547 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 20:54:56.628955  743547 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 20:54:56.671457  743547 cri.go:89] found id: "b1921b3926c4fba551a94a0ec78b54be832b8754401c93ba491ed82e1b71e6be"
	I1202 20:54:56.671542  743547 cri.go:89] found id: "e1e39d0565d3822bf2f251fdb0e8de5f07938ae3aad30710f3eb435ed8294864"
	I1202 20:54:56.671588  743547 cri.go:89] found id: "b30d0a318021ad78d96505cbec12dab08e463997373813e56adc6e14d585834d"
	I1202 20:54:56.671610  743547 cri.go:89] found id: "670db3462ea1c5beb2d55dfd0859b3df17a3bf33ad117a56693583fcb4ccdd66"
	I1202 20:54:56.671636  743547 cri.go:89] found id: ""
	I1202 20:54:56.671705  743547 ssh_runner.go:195] Run: sudo runc list -f json
	W1202 20:54:56.690130  743547 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T20:54:56Z" level=error msg="open /run/runc: no such file or directory"
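The container IDs listed above come from a single crictl query over the kube-system namespace label; the subsequent runc listing fails with a warning and the restart flow simply continues. A sketch of the same query (assuming crictl and sudo are on PATH; not the ssh_runner plumbing minikube actually uses):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// kubeSystemContainerIDs lists every container, running or not, whose pod
// namespace label is kube-system, returning the bare IDs crictl prints.
func kubeSystemContainerIDs() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	ids, err := kubeSystemContainerIDs()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	fmt.Printf("found %d kube-system containers\n", len(ids))
}
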
	I1202 20:54:56.690230  743547 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 20:54:56.708246  743547 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1202 20:54:56.708273  743547 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1202 20:54:56.708319  743547 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1202 20:54:56.720174  743547 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1202 20:54:56.721412  743547 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-992336" does not appear in /home/jenkins/minikube-integration/21997-407427/kubeconfig
	I1202 20:54:56.721919  743547 kubeconfig.go:62] /home/jenkins/minikube-integration/21997-407427/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-992336" cluster setting kubeconfig missing "old-k8s-version-992336" context setting]
	I1202 20:54:56.723060  743547 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-407427/kubeconfig: {Name:mk71769b01652992a8aaa239aae4f5c38b3500e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:54:56.725527  743547 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1202 20:54:56.740149  743547 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1202 20:54:56.740191  743547 kubeadm.go:602] duration metric: took 31.910169ms to restartPrimaryControlPlane
	I1202 20:54:56.740203  743547 kubeadm.go:403] duration metric: took 111.45868ms to StartCluster
	I1202 20:54:56.740224  743547 settings.go:142] acquiring lock: {Name:mk79858205e0c110b31d2645b49097e349ff37c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:54:56.740303  743547 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-407427/kubeconfig
	I1202 20:54:56.741496  743547 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-407427/kubeconfig: {Name:mk71769b01652992a8aaa239aae4f5c38b3500e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:54:56.741802  743547 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 20:54:56.742098  743547 config.go:182] Loaded profile config "old-k8s-version-992336": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1202 20:54:56.742170  743547 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1202 20:54:56.742263  743547 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-992336"
	I1202 20:54:56.742288  743547 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-992336"
	W1202 20:54:56.742297  743547 addons.go:248] addon storage-provisioner should already be in state true
	I1202 20:54:56.742330  743547 host.go:66] Checking if "old-k8s-version-992336" exists ...
	I1202 20:54:56.742855  743547 cli_runner.go:164] Run: docker container inspect old-k8s-version-992336 --format={{.State.Status}}
	I1202 20:54:56.742984  743547 addons.go:70] Setting dashboard=true in profile "old-k8s-version-992336"
	I1202 20:54:56.743010  743547 addons.go:239] Setting addon dashboard=true in "old-k8s-version-992336"
	W1202 20:54:56.743021  743547 addons.go:248] addon dashboard should already be in state true
	I1202 20:54:56.743017  743547 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-992336"
	I1202 20:54:56.743057  743547 host.go:66] Checking if "old-k8s-version-992336" exists ...
	I1202 20:54:56.743058  743547 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-992336"
	I1202 20:54:56.743415  743547 cli_runner.go:164] Run: docker container inspect old-k8s-version-992336 --format={{.State.Status}}
	I1202 20:54:56.743565  743547 cli_runner.go:164] Run: docker container inspect old-k8s-version-992336 --format={{.State.Status}}
	I1202 20:54:56.747183  743547 out.go:179] * Verifying Kubernetes components...
	I1202 20:54:56.751095  743547 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 20:54:56.779215  743547 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 20:54:56.779222  743547 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1202 20:54:56.780910  743547 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 20:54:56.780933  743547 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1202 20:54:56.780934  743547 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1202 20:54:55.518402  736301 out.go:252]   - Configuring RBAC rules ...
	I1202 20:54:55.518551  736301 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1202 20:54:55.525177  736301 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1202 20:54:55.532974  736301 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1202 20:54:55.536672  736301 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1202 20:54:55.540648  736301 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1202 20:54:55.544671  736301 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1202 20:54:55.854962  736301 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1202 20:54:56.282748  736301 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1202 20:54:56.855924  736301 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1202 20:54:56.858599  736301 kubeadm.go:319] 
	I1202 20:54:56.858728  736301 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1202 20:54:56.858735  736301 kubeadm.go:319] 
	I1202 20:54:56.858833  736301 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1202 20:54:56.858838  736301 kubeadm.go:319] 
	I1202 20:54:56.858870  736301 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1202 20:54:56.858943  736301 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1202 20:54:56.859016  736301 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1202 20:54:56.859022  736301 kubeadm.go:319] 
	I1202 20:54:56.859103  736301 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1202 20:54:56.859109  736301 kubeadm.go:319] 
	I1202 20:54:56.859165  736301 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1202 20:54:56.859178  736301 kubeadm.go:319] 
	I1202 20:54:56.859235  736301 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1202 20:54:56.859323  736301 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1202 20:54:56.859397  736301 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1202 20:54:56.859403  736301 kubeadm.go:319] 
	I1202 20:54:56.859502  736301 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1202 20:54:56.859589  736301 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1202 20:54:56.859596  736301 kubeadm.go:319] 
	I1202 20:54:56.859693  736301 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token clatot.hc48jyk0hvxonz06 \
	I1202 20:54:56.859818  736301 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:48349ffa2369b87cd15480ece56edb1c5c5d97a828930f9dfbf1d1ceccf80ff4 \
	I1202 20:54:56.859842  736301 kubeadm.go:319] 	--control-plane 
	I1202 20:54:56.859847  736301 kubeadm.go:319] 
	I1202 20:54:56.859939  736301 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1202 20:54:56.859945  736301 kubeadm.go:319] 
	I1202 20:54:56.860051  736301 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token clatot.hc48jyk0hvxonz06 \
	I1202 20:54:56.860179  736301 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:48349ffa2369b87cd15480ece56edb1c5c5d97a828930f9dfbf1d1ceccf80ff4 
	I1202 20:54:56.865687  736301 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1202 20:54:56.865923  736301 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
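The --discovery-token-ca-cert-hash value quoted in the join commands above follows kubeadm's documented format: a SHA-256 over the cluster CA certificate's DER-encoded Subject Public Key Info. A sketch of how that value could be recomputed for verification (the path is an illustrative assumption):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

// caCertHash returns "sha256:" followed by the hex SHA-256 of the CA
// certificate's Subject Public Key Info, the form kubeadm prints above.
func caCertHash(caPath string) (string, error) {
	data, err := os.ReadFile(caPath)
	if err != nil {
		return "", err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return "", fmt.Errorf("no PEM data in %s", caPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return "", err
	}
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	return fmt.Sprintf("sha256:%x", sum), nil
}

func main() {
	h, err := caCertHash("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println(h)
}
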
	I1202 20:54:56.865962  736301 cni.go:84] Creating CNI manager for ""
	I1202 20:54:56.865975  736301 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 20:54:56.868615  736301 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1202 20:54:55.389753  727677 node_ready.go:57] node "no-preload-336331" has "Ready":"False" status (will retry)
	W1202 20:54:57.391499  727677 node_ready.go:57] node "no-preload-336331" has "Ready":"False" status (will retry)
	I1202 20:54:57.889990  727677 node_ready.go:49] node "no-preload-336331" is "Ready"
	I1202 20:54:57.890026  727677 node_ready.go:38] duration metric: took 13.504157695s for node "no-preload-336331" to be "Ready" ...
	I1202 20:54:57.890044  727677 api_server.go:52] waiting for apiserver process to appear ...
	I1202 20:54:57.890144  727677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:54:57.912775  727677 api_server.go:72] duration metric: took 13.890609716s to wait for apiserver process to appear ...
	I1202 20:54:57.912809  727677 api_server.go:88] waiting for apiserver healthz status ...
	I1202 20:54:57.912934  727677 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1202 20:54:57.923648  727677 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1202 20:54:57.925968  727677 api_server.go:141] control plane version: v1.35.0-beta.0
	I1202 20:54:57.926004  727677 api_server.go:131] duration metric: took 13.121364ms to wait for apiserver health ...
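The healthz probe above is a plain HTTPS GET that expects a 200 response with body "ok". A self-contained sketch of that check; certificate verification is skipped here only to keep the example short, whereas the real client trusts the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkHealthz reports whether the apiserver's /healthz endpoint answers
// 200 with body "ok", the condition logged as "returned 200: ok".
func checkHealthz(url string) (bool, error) {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Assumption for the sketch only: skip verification instead of
			// loading the cluster CA into a tls.Config RootCAs pool.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
}

func main() {
	ok, err := checkHealthz("https://192.168.76.2:8443/healthz")
	if err != nil {
		fmt.Println("healthz error:", err)
		return
	}
	fmt.Println("healthy:", ok)
}
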
	I1202 20:54:57.926015  727677 system_pods.go:43] waiting for kube-system pods to appear ...
	I1202 20:54:57.930714  727677 system_pods.go:59] 8 kube-system pods found
	I1202 20:54:57.930823  727677 system_pods.go:61] "coredns-7d764666f9-ghxk6" [1696ea67-a1db-437c-bada-07c12d4e9fc8] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 20:54:57.930836  727677 system_pods.go:61] "etcd-no-preload-336331" [7e4664de-2a98-4d1e-911f-2cb479f4a42c] Running
	I1202 20:54:57.930844  727677 system_pods.go:61] "kindnet-5blk7" [8fc0ac0d-1c00-4916-99fd-a1b4e6eec75e] Running
	I1202 20:54:57.930851  727677 system_pods.go:61] "kube-apiserver-no-preload-336331" [09086c71-7e4a-40ce-b450-3a3a76d2b092] Running
	I1202 20:54:57.930880  727677 system_pods.go:61] "kube-controller-manager-no-preload-336331" [d556ac70-884a-46d0-aa2d-4fbd065aa125] Running
	I1202 20:54:57.930886  727677 system_pods.go:61] "kube-proxy-qc2v9" [91426b3b-e557-4959-91b3-cb5e256351ac] Running
	I1202 20:54:57.930901  727677 system_pods.go:61] "kube-scheduler-no-preload-336331" [b648b0ee-a3d0-41d2-93b9-fe72216bcec3] Running
	I1202 20:54:57.930910  727677 system_pods.go:61] "storage-provisioner" [e3c38dcd-7f1f-4382-bf82-b09cde780bdb] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 20:54:57.930921  727677 system_pods.go:74] duration metric: took 4.81671ms to wait for pod list to return data ...
	I1202 20:54:57.930933  727677 default_sa.go:34] waiting for default service account to be created ...
	I1202 20:54:57.934602  727677 default_sa.go:45] found service account: "default"
	I1202 20:54:57.934629  727677 default_sa.go:55] duration metric: took 3.687516ms for default service account to be created ...
	I1202 20:54:57.934641  727677 system_pods.go:116] waiting for k8s-apps to be running ...
	I1202 20:54:57.939126  727677 system_pods.go:86] 8 kube-system pods found
	I1202 20:54:57.939176  727677 system_pods.go:89] "coredns-7d764666f9-ghxk6" [1696ea67-a1db-437c-bada-07c12d4e9fc8] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 20:54:57.939186  727677 system_pods.go:89] "etcd-no-preload-336331" [7e4664de-2a98-4d1e-911f-2cb479f4a42c] Running
	I1202 20:54:57.939194  727677 system_pods.go:89] "kindnet-5blk7" [8fc0ac0d-1c00-4916-99fd-a1b4e6eec75e] Running
	I1202 20:54:57.939200  727677 system_pods.go:89] "kube-apiserver-no-preload-336331" [09086c71-7e4a-40ce-b450-3a3a76d2b092] Running
	I1202 20:54:57.939207  727677 system_pods.go:89] "kube-controller-manager-no-preload-336331" [d556ac70-884a-46d0-aa2d-4fbd065aa125] Running
	I1202 20:54:57.939212  727677 system_pods.go:89] "kube-proxy-qc2v9" [91426b3b-e557-4959-91b3-cb5e256351ac] Running
	I1202 20:54:57.939217  727677 system_pods.go:89] "kube-scheduler-no-preload-336331" [b648b0ee-a3d0-41d2-93b9-fe72216bcec3] Running
	I1202 20:54:57.939225  727677 system_pods.go:89] "storage-provisioner" [e3c38dcd-7f1f-4382-bf82-b09cde780bdb] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 20:54:57.939256  727677 retry.go:31] will retry after 254.058998ms: missing components: kube-dns
	I1202 20:54:58.199625  727677 system_pods.go:86] 8 kube-system pods found
	I1202 20:54:58.199671  727677 system_pods.go:89] "coredns-7d764666f9-ghxk6" [1696ea67-a1db-437c-bada-07c12d4e9fc8] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 20:54:58.199680  727677 system_pods.go:89] "etcd-no-preload-336331" [7e4664de-2a98-4d1e-911f-2cb479f4a42c] Running
	I1202 20:54:58.199689  727677 system_pods.go:89] "kindnet-5blk7" [8fc0ac0d-1c00-4916-99fd-a1b4e6eec75e] Running
	I1202 20:54:58.199696  727677 system_pods.go:89] "kube-apiserver-no-preload-336331" [09086c71-7e4a-40ce-b450-3a3a76d2b092] Running
	I1202 20:54:58.199703  727677 system_pods.go:89] "kube-controller-manager-no-preload-336331" [d556ac70-884a-46d0-aa2d-4fbd065aa125] Running
	I1202 20:54:58.199708  727677 system_pods.go:89] "kube-proxy-qc2v9" [91426b3b-e557-4959-91b3-cb5e256351ac] Running
	I1202 20:54:58.199713  727677 system_pods.go:89] "kube-scheduler-no-preload-336331" [b648b0ee-a3d0-41d2-93b9-fe72216bcec3] Running
	I1202 20:54:58.199722  727677 system_pods.go:89] "storage-provisioner" [e3c38dcd-7f1f-4382-bf82-b09cde780bdb] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 20:54:58.199742  727677 retry.go:31] will retry after 342.156745ms: missing components: kube-dns
	I1202 20:54:56.780993  743547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-992336
	I1202 20:54:56.782584  743547 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1202 20:54:56.782619  743547 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1202 20:54:56.782691  743547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-992336
	I1202 20:54:56.784631  743547 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-992336"
	W1202 20:54:56.784664  743547 addons.go:248] addon default-storageclass should already be in state true
	I1202 20:54:56.784697  743547 host.go:66] Checking if "old-k8s-version-992336" exists ...
	I1202 20:54:56.786161  743547 cli_runner.go:164] Run: docker container inspect old-k8s-version-992336 --format={{.State.Status}}
	I1202 20:54:56.831348  743547 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/old-k8s-version-992336/id_rsa Username:docker}
	I1202 20:54:56.838761  743547 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/old-k8s-version-992336/id_rsa Username:docker}
	I1202 20:54:56.839118  743547 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1202 20:54:56.839144  743547 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1202 20:54:56.839212  743547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-992336
	I1202 20:54:56.877157  743547 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/old-k8s-version-992336/id_rsa Username:docker}
	I1202 20:54:57.000378  743547 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1202 20:54:57.000478  743547 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1202 20:54:57.001473  743547 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 20:54:57.051688  743547 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 20:54:57.053612  743547 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-992336" to be "Ready" ...
	I1202 20:54:57.062772  743547 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1202 20:54:57.062802  743547 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1202 20:54:57.099632  743547 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1202 20:54:57.099665  743547 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1202 20:54:57.102715  743547 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1202 20:54:57.128982  743547 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1202 20:54:57.129013  743547 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1202 20:54:57.151853  743547 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1202 20:54:57.151871  743547 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1202 20:54:57.180800  743547 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1202 20:54:57.180826  743547 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1202 20:54:57.207394  743547 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1202 20:54:57.207423  743547 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1202 20:54:57.238669  743547 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1202 20:54:57.238701  743547 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1202 20:54:57.264954  743547 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1202 20:54:57.265009  743547 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1202 20:54:57.288116  743547 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
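The dashboard manifests staged above are applied with a single kubectl invocation carrying one -f flag per file, using the node-local binary and kubeconfig. A sketch that assembles (and only prints) an equivalent command; the paths mirror the log but the helper itself is a hypothetical stand-in for minikube's addon code:

package main

import (
	"fmt"
	"os/exec"
	"path"
)

// dashboardApplyCmd builds one `kubectl apply -f ... -f ...` command over the
// staged addon manifests, with KUBECONFIG pointing at the node-local file.
func dashboardApplyCmd(kubectl, kubeconfig string, manifests []string) *exec.Cmd {
	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", path.Join("/etc/kubernetes/addons", m))
	}
	cmd := exec.Command(kubectl, args...)
	cmd.Env = append(cmd.Env, "KUBECONFIG="+kubeconfig)
	return cmd
}

func main() {
	manifests := []string{
		"dashboard-ns.yaml", "dashboard-clusterrole.yaml", "dashboard-clusterrolebinding.yaml",
		"dashboard-configmap.yaml", "dashboard-dp.yaml", "dashboard-role.yaml",
		"dashboard-rolebinding.yaml", "dashboard-sa.yaml", "dashboard-secret.yaml",
		"dashboard-svc.yaml",
	}
	cmd := dashboardApplyCmd("/var/lib/minikube/binaries/v1.28.0/kubectl",
		"/var/lib/minikube/kubeconfig", manifests)
	fmt.Println(cmd.String()) // print only; the real run executes this over SSH with sudo
}
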
	I1202 20:54:59.263131  743547 node_ready.go:49] node "old-k8s-version-992336" is "Ready"
	I1202 20:54:59.263168  743547 node_ready.go:38] duration metric: took 2.209490941s for node "old-k8s-version-992336" to be "Ready" ...
	I1202 20:54:59.263187  743547 api_server.go:52] waiting for apiserver process to appear ...
	I1202 20:54:59.263244  743547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:55:00.033214  743547 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.981484522s)
	I1202 20:55:00.033304  743547 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.93055748s)
	I1202 20:55:00.490811  743547 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.202644047s)
	I1202 20:55:00.490986  743547 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.227720068s)
	I1202 20:55:00.491022  743547 api_server.go:72] duration metric: took 3.749188411s to wait for apiserver process to appear ...
	I1202 20:55:00.491030  743547 api_server.go:88] waiting for apiserver healthz status ...
	I1202 20:55:00.491062  743547 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1202 20:55:00.493010  743547 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-992336 addons enable metrics-server
	
	I1202 20:55:00.494606  743547 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1202 20:54:58.547286  727677 system_pods.go:86] 8 kube-system pods found
	I1202 20:54:58.547327  727677 system_pods.go:89] "coredns-7d764666f9-ghxk6" [1696ea67-a1db-437c-bada-07c12d4e9fc8] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 20:54:58.547335  727677 system_pods.go:89] "etcd-no-preload-336331" [7e4664de-2a98-4d1e-911f-2cb479f4a42c] Running
	I1202 20:54:58.547344  727677 system_pods.go:89] "kindnet-5blk7" [8fc0ac0d-1c00-4916-99fd-a1b4e6eec75e] Running
	I1202 20:54:58.547349  727677 system_pods.go:89] "kube-apiserver-no-preload-336331" [09086c71-7e4a-40ce-b450-3a3a76d2b092] Running
	I1202 20:54:58.547355  727677 system_pods.go:89] "kube-controller-manager-no-preload-336331" [d556ac70-884a-46d0-aa2d-4fbd065aa125] Running
	I1202 20:54:58.547359  727677 system_pods.go:89] "kube-proxy-qc2v9" [91426b3b-e557-4959-91b3-cb5e256351ac] Running
	I1202 20:54:58.547364  727677 system_pods.go:89] "kube-scheduler-no-preload-336331" [b648b0ee-a3d0-41d2-93b9-fe72216bcec3] Running
	I1202 20:54:58.547371  727677 system_pods.go:89] "storage-provisioner" [e3c38dcd-7f1f-4382-bf82-b09cde780bdb] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 20:54:58.547389  727677 retry.go:31] will retry after 368.951031ms: missing components: kube-dns
	I1202 20:54:58.921450  727677 system_pods.go:86] 8 kube-system pods found
	I1202 20:54:58.921490  727677 system_pods.go:89] "coredns-7d764666f9-ghxk6" [1696ea67-a1db-437c-bada-07c12d4e9fc8] Running
	I1202 20:54:58.921499  727677 system_pods.go:89] "etcd-no-preload-336331" [7e4664de-2a98-4d1e-911f-2cb479f4a42c] Running
	I1202 20:54:58.921505  727677 system_pods.go:89] "kindnet-5blk7" [8fc0ac0d-1c00-4916-99fd-a1b4e6eec75e] Running
	I1202 20:54:58.921510  727677 system_pods.go:89] "kube-apiserver-no-preload-336331" [09086c71-7e4a-40ce-b450-3a3a76d2b092] Running
	I1202 20:54:58.921515  727677 system_pods.go:89] "kube-controller-manager-no-preload-336331" [d556ac70-884a-46d0-aa2d-4fbd065aa125] Running
	I1202 20:54:58.921520  727677 system_pods.go:89] "kube-proxy-qc2v9" [91426b3b-e557-4959-91b3-cb5e256351ac] Running
	I1202 20:54:58.921525  727677 system_pods.go:89] "kube-scheduler-no-preload-336331" [b648b0ee-a3d0-41d2-93b9-fe72216bcec3] Running
	I1202 20:54:58.921530  727677 system_pods.go:89] "storage-provisioner" [e3c38dcd-7f1f-4382-bf82-b09cde780bdb] Running
	I1202 20:54:58.921541  727677 system_pods.go:126] duration metric: took 986.887188ms to wait for k8s-apps to be running ...
	I1202 20:54:58.921550  727677 system_svc.go:44] waiting for kubelet service to be running ....
	I1202 20:54:58.921604  727677 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 20:54:58.936808  727677 system_svc.go:56] duration metric: took 15.220965ms WaitForService to wait for kubelet
	I1202 20:54:58.936842  727677 kubeadm.go:587] duration metric: took 14.914814409s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 20:54:58.936868  727677 node_conditions.go:102] verifying NodePressure condition ...
	I1202 20:54:58.940483  727677 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1202 20:54:58.940521  727677 node_conditions.go:123] node cpu capacity is 8
	I1202 20:54:58.940543  727677 node_conditions.go:105] duration metric: took 3.669091ms to run NodePressure ...
	I1202 20:54:58.940560  727677 start.go:242] waiting for startup goroutines ...
	I1202 20:54:58.940570  727677 start.go:247] waiting for cluster config update ...
	I1202 20:54:58.940582  727677 start.go:256] writing updated cluster config ...
	I1202 20:54:58.940940  727677 ssh_runner.go:195] Run: rm -f paused
	I1202 20:54:58.946442  727677 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1202 20:54:58.950994  727677 pod_ready.go:83] waiting for pod "coredns-7d764666f9-ghxk6" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:54:58.956333  727677 pod_ready.go:94] pod "coredns-7d764666f9-ghxk6" is "Ready"
	I1202 20:54:58.956362  727677 pod_ready.go:86] duration metric: took 5.338212ms for pod "coredns-7d764666f9-ghxk6" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:54:58.961022  727677 pod_ready.go:83] waiting for pod "etcd-no-preload-336331" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:54:58.967156  727677 pod_ready.go:94] pod "etcd-no-preload-336331" is "Ready"
	I1202 20:54:58.967197  727677 pod_ready.go:86] duration metric: took 6.143693ms for pod "etcd-no-preload-336331" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:54:58.970251  727677 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-336331" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:54:58.975849  727677 pod_ready.go:94] pod "kube-apiserver-no-preload-336331" is "Ready"
	I1202 20:54:58.975894  727677 pod_ready.go:86] duration metric: took 5.606631ms for pod "kube-apiserver-no-preload-336331" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:54:58.979032  727677 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-336331" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:54:59.351307  727677 pod_ready.go:94] pod "kube-controller-manager-no-preload-336331" is "Ready"
	I1202 20:54:59.351337  727677 pod_ready.go:86] duration metric: took 372.272976ms for pod "kube-controller-manager-no-preload-336331" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:54:59.552225  727677 pod_ready.go:83] waiting for pod "kube-proxy-qc2v9" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:54:59.951963  727677 pod_ready.go:94] pod "kube-proxy-qc2v9" is "Ready"
	I1202 20:54:59.952012  727677 pod_ready.go:86] duration metric: took 399.754386ms for pod "kube-proxy-qc2v9" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:55:00.151862  727677 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-336331" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:55:00.551517  727677 pod_ready.go:94] pod "kube-scheduler-no-preload-336331" is "Ready"
	I1202 20:55:00.551567  727677 pod_ready.go:86] duration metric: took 399.673435ms for pod "kube-scheduler-no-preload-336331" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:55:00.551585  727677 pod_ready.go:40] duration metric: took 1.605104621s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1202 20:55:00.623116  727677 start.go:625] kubectl: 1.34.2, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1202 20:55:00.625337  727677 out.go:179] * Done! kubectl is now configured to use "no-preload-336331" cluster and "default" namespace by default
	I1202 20:54:56.429637  744523 kubeadm.go:884] updating cluster {Name:newest-cni-245604 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-245604 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 20:54:56.429813  744523 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1202 20:54:56.429873  744523 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 20:54:56.470335  744523 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
	I1202 20:54:56.470367  744523 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-beta.0 registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 registry.k8s.io/kube-scheduler:v1.35.0-beta.0 registry.k8s.io/kube-proxy:v1.35.0-beta.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.5-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1202 20:54:56.470443  744523 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 20:54:56.470709  744523 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 20:54:56.470835  744523 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 20:54:56.470944  744523 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 20:54:56.471113  744523 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 20:54:56.471227  744523 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1202 20:54:56.471312  744523 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
	I1202 20:54:56.471416  744523 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1202 20:54:56.474235  744523 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1202 20:54:56.474674  744523 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 20:54:56.474720  744523 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
	I1202 20:54:56.474716  744523 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 20:54:56.474788  744523 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1202 20:54:56.475527  744523 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 20:54:56.475871  744523 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 20:54:56.476514  744523 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 20:54:56.627881  744523 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 20:54:56.635408  744523 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.5-0
	I1202 20:54:56.645721  744523 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.13.1
	I1202 20:54:56.656260  744523 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 20:54:56.665724  744523 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 20:54:56.674018  744523 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1202 20:54:56.686804  744523 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 20:54:56.690583  744523 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" does not exist at hash "45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc" in container runtime
	I1202 20:54:56.690704  744523 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 20:54:56.690760  744523 ssh_runner.go:195] Run: which crictl
	I1202 20:54:56.707645  744523 cache_images.go:118] "registry.k8s.io/etcd:3.6.5-0" needs transfer: "registry.k8s.io/etcd:3.6.5-0" does not exist at hash "a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1" in container runtime
	I1202 20:54:56.707701  744523 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.5-0
	I1202 20:54:56.707771  744523 ssh_runner.go:195] Run: which crictl
	I1202 20:54:56.729690  744523 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139" in container runtime
	I1202 20:54:56.729741  744523 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1202 20:54:56.729790  744523 ssh_runner.go:195] Run: which crictl
	I1202 20:54:56.730634  744523 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" does not exist at hash "aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b" in container runtime
	I1202 20:54:56.730670  744523 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 20:54:56.730712  744523 ssh_runner.go:195] Run: which crictl
	I1202 20:54:56.748602  744523 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" does not exist at hash "7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46" in container runtime
	I1202 20:54:56.748650  744523 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 20:54:56.748713  744523 ssh_runner.go:195] Run: which crictl
	I1202 20:54:56.779664  744523 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I1202 20:54:56.779729  744523 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 20:54:56.779748  744523 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1202 20:54:56.779805  744523 ssh_runner.go:195] Run: which crictl
	I1202 20:54:56.779817  744523 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1202 20:54:56.779663  744523 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-beta.0" does not exist at hash "8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810" in container runtime
	I1202 20:54:56.779842  744523 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 20:54:56.779872  744523 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 20:54:56.779878  744523 ssh_runner.go:195] Run: which crictl
	I1202 20:54:56.779903  744523 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 20:54:56.779731  744523 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1202 20:54:56.877780  744523 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 20:54:56.893301  744523 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1202 20:54:56.893403  744523 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1202 20:54:56.893456  744523 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 20:54:56.893522  744523 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 20:54:56.893577  744523 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 20:54:56.893630  744523 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1202 20:54:56.979424  744523 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 20:54:56.979467  744523 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1202 20:54:56.979427  744523 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1202 20:54:56.979522  744523 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 20:54:56.979694  744523 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 20:54:56.979787  744523 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 20:54:56.979870  744523 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1202 20:54:57.063429  744523 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1
	I1202 20:54:57.063525  744523 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
	I1202 20:54:57.063574  744523 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
	I1202 20:54:57.063635  744523 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1202 20:54:57.063715  744523 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
	I1202 20:54:57.063773  744523 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1202 20:54:57.063798  744523 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1202 20:54:57.063529  744523 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1202 20:54:57.073765  744523 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0
	I1202 20:54:57.073970  744523 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1202 20:54:57.073976  744523 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 20:54:57.074150  744523 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0': No such file or directory
	I1202 20:54:57.074177  744523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0 (23131648 bytes)
	I1202 20:54:57.074309  744523 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0
	I1202 20:54:57.090729  744523 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0': No such file or directory
	I1202 20:54:57.090765  744523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0 (27682304 bytes)
	I1202 20:54:57.090852  744523 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I1202 20:54:57.090867  744523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (23562752 bytes)
	I1202 20:54:57.091043  744523 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0': No such file or directory
	I1202 20:54:57.091207  744523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0 (17239040 bytes)
	I1202 20:54:57.151485  744523 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.5-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.5-0': No such file or directory
	I1202 20:54:57.151520  744523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 --> /var/lib/minikube/images/etcd_3.6.5-0 (22883840 bytes)
	I1202 20:54:57.151798  744523 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
	I1202 20:54:57.151964  744523 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1202 20:54:57.152031  744523 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1202 20:54:57.152553  744523 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1202 20:54:57.254451  744523 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1202 20:54:57.254502  744523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (321024 bytes)
	I1202 20:54:57.257229  744523 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0-beta.0': No such file or directory
	I1202 20:54:57.257317  744523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0 (25788928 bytes)
	I1202 20:54:57.392528  744523 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1202 20:54:57.392642  744523 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	I1202 20:54:57.810758  744523 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 20:54:57.869494  744523 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 from cache
	I1202 20:54:57.869554  744523 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1202 20:54:57.869628  744523 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1202 20:54:57.932920  744523 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1202 20:54:57.932975  744523 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 20:54:57.933024  744523 ssh_runner.go:195] Run: which crictl
	I1202 20:54:59.294687  744523 ssh_runner.go:235] Completed: which crictl: (1.361639017s)
	I1202 20:54:59.294768  744523 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 20:54:59.294838  744523 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: (1.425189868s)
	I1202 20:54:59.294869  744523 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 from cache
	I1202 20:54:59.294918  744523 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1202 20:54:59.294967  744523 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1202 20:55:00.817466  744523 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.522668777s)
	I1202 20:55:00.817551  744523 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 20:55:00.817627  744523 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: (1.522632151s)
	I1202 20:55:00.817648  744523 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 from cache
	I1202 20:55:00.817674  744523 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1202 20:55:00.817704  744523 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1
	I1202 20:55:00.848635  744523 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
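
The interleaved 744523 lines above show the cached-image load path: stat the archive under /var/lib/minikube/images, scp it from the local cache only when the stat fails, then load it into the CRI-O store with `sudo podman load -i`. A minimal local sketch of that check-then-copy-then-load sequence, using plain os/exec and a local file copy in place of minikube's ssh_runner (the helper and paths are illustrative assumptions, not the real implementation):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
)

// loadCachedImage mirrors the flow in the log: stat the image on the node,
// copy it over only when the stat fails, then load it into the runtime's
// image store with podman. A plain local copy stands in for minikube's
// ssh_runner-based scp.
func loadCachedImage(localCache, remoteDir, name string) error {
	dst := filepath.Join(remoteDir, name)
	// Existence check, as in: stat -c "%s %y" <dst> (exit status 1 when missing).
	if err := exec.Command("stat", "-c", "%s %y", dst).Run(); err != nil {
		data, err := os.ReadFile(filepath.Join(localCache, name))
		if err != nil {
			return fmt.Errorf("read cached image: %w", err)
		}
		if err := os.WriteFile(dst, data, 0o644); err != nil {
			return fmt.Errorf("copy image: %w", err)
		}
	}
	// Load into the runtime, as in: sudo podman load -i <dst>.
	if out, err := exec.Command("sudo", "podman", "load", "-i", dst).CombinedOutput(); err != nil {
		return fmt.Errorf("podman load: %v\n%s", err, out)
	}
	return nil
}

func main() {
	err := loadCachedImage(
		"/home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io",
		"/var/lib/minikube/images",
		"pause_3.10.1",
	)
	if err != nil {
		fmt.Println("error:", err)
	}
}
```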
	I1202 20:54:56.870332  736301 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1202 20:54:56.877419  736301 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1202 20:54:56.877436  736301 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1202 20:54:56.902275  736301 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1202 20:54:57.337788  736301 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1202 20:54:57.337991  736301 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 20:54:57.338104  736301 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-997805 minikube.k8s.io/updated_at=2025_12_02T20_54_57_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=c0dec3f81fa1d67ed4ee425e0873d0aa009f9b92 minikube.k8s.io/name=default-k8s-diff-port-997805 minikube.k8s.io/primary=true
	I1202 20:54:57.477817  736301 ops.go:34] apiserver oom_adj: -16
	I1202 20:54:57.477829  736301 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 20:54:57.978414  736301 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 20:54:58.478319  736301 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 20:54:58.980154  736301 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 20:54:59.478288  736301 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 20:54:59.978296  736301 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 20:55:00.478855  736301 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 20:55:00.978336  736301 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 20:55:01.478217  736301 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 20:55:01.560150  736301 kubeadm.go:1114] duration metric: took 4.222209683s to wait for elevateKubeSystemPrivileges
	I1202 20:55:01.560198  736301 kubeadm.go:403] duration metric: took 16.697560258s to StartCluster
	I1202 20:55:01.560223  736301 settings.go:142] acquiring lock: {Name:mk79858205e0c110b31d2645b49097e349ff37c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:55:01.560308  736301 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-407427/kubeconfig
	I1202 20:55:01.561505  736301 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-407427/kubeconfig: {Name:mk71769b01652992a8aaa239aae4f5c38b3500e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:55:01.561778  736301 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 20:55:01.561831  736301 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1202 20:55:01.561928  736301 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-997805"
	I1202 20:55:01.561953  736301 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-997805"
	I1202 20:55:01.561973  736301 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-997805"
	I1202 20:55:01.561980  736301 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-997805"
	I1202 20:55:01.561813  736301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1202 20:55:01.562021  736301 config.go:182] Loaded profile config "default-k8s-diff-port-997805": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 20:55:01.562004  736301 host.go:66] Checking if "default-k8s-diff-port-997805" exists ...
	I1202 20:55:01.562425  736301 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-997805 --format={{.State.Status}}
	I1202 20:55:01.562664  736301 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-997805 --format={{.State.Status}}
	I1202 20:55:01.564706  736301 out.go:179] * Verifying Kubernetes components...
	I1202 20:55:01.566104  736301 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 20:55:01.589813  736301 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-997805"
	I1202 20:55:01.589873  736301 host.go:66] Checking if "default-k8s-diff-port-997805" exists ...
	I1202 20:55:01.590425  736301 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-997805 --format={{.State.Status}}
	I1202 20:55:01.590987  736301 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 20:55:01.592179  736301 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 20:55:01.592201  736301 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1202 20:55:01.592270  736301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-997805
	I1202 20:55:01.619646  736301 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1202 20:55:01.619694  736301 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1202 20:55:01.619759  736301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-997805
	I1202 20:55:01.627920  736301 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33483 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/default-k8s-diff-port-997805/id_rsa Username:docker}
	I1202 20:55:01.654225  736301 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33483 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/default-k8s-diff-port-997805/id_rsa Username:docker}
	I1202 20:55:01.682285  736301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1202 20:55:01.736624  736301 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 20:55:01.766566  736301 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 20:55:01.788518  736301 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1202 20:55:01.900235  736301 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1202 20:55:01.901603  736301 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-997805" to be "Ready" ...
	I1202 20:55:02.127286  736301 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
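
The repeated `kubectl get sa default` calls above are the elevateKubeSystemPrivileges wait: minikube retries roughly every 500ms until the default service account exists, which is what the 4.22s duration metric measures. A hedged sketch of that polling loop with os/exec (the kubectl path, kubeconfig path, and timeout are placeholders):

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA polls `kubectl get sa default` until it succeeds or the
// deadline passes, mirroring the ~500ms retry cadence visible in the log.
func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		cmd := exec.Command(kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil // service account exists; RBAC bootstrap can proceed
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("default service account not created within %s", timeout)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	// Placeholder paths; the real run uses the node-local kubectl and kubeconfig.
	if err := waitForDefaultSA("kubectl", "/var/lib/minikube/kubeconfig", 2*time.Minute); err != nil {
		fmt.Println("error:", err)
	}
}
```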
	I1202 20:55:00.495919  743547 addons.go:530] duration metric: took 3.753750261s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1202 20:55:00.497622  743547 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1202 20:55:00.497666  743547 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1202 20:55:00.991191  743547 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1202 20:55:00.996136  743547 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1202 20:55:00.997346  743547 api_server.go:141] control plane version: v1.28.0
	I1202 20:55:00.997377  743547 api_server.go:131] duration metric: took 506.333183ms to wait for apiserver health ...
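
The 500 responses above are expected while apiserver post-start hooks (here poststarthook/rbac/bootstrap-roles) are still completing; the waiter simply re-polls /healthz until it gets a 200. A small sketch of such a poller, assuming the endpoint from the log and skipping TLS verification for brevity (minikube itself talks to the endpoint with the cluster CA configured):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200,
// printing the failing check output (like the healthz dumps above) meanwhile.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("apiserver did not become healthy within %s", timeout)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	_ = waitForHealthz("https://192.168.94.2:8443/healthz", 2*time.Minute)
}
```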
	I1202 20:55:00.997390  743547 system_pods.go:43] waiting for kube-system pods to appear ...
	I1202 20:55:01.001606  743547 system_pods.go:59] 8 kube-system pods found
	I1202 20:55:01.001663  743547 system_pods.go:61] "coredns-5dd5756b68-ptzsf" [14b9d2d2-4853-419f-ad27-5d6f4c9c7e2c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 20:55:01.001678  743547 system_pods.go:61] "etcd-old-k8s-version-992336" [22527607-8153-442e-97cb-93555cbcdd3a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1202 20:55:01.001689  743547 system_pods.go:61] "kindnet-jvmsp" [51a76a82-d4d0-4909-a7a7-49ad2e3fd9f0] Running
	I1202 20:55:01.001703  743547 system_pods.go:61] "kube-apiserver-old-k8s-version-992336" [5049999c-2987-49b7-ba74-9d7621b0759a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1202 20:55:01.001716  743547 system_pods.go:61] "kube-controller-manager-old-k8s-version-992336" [34f637f6-d1c4-4620-9705-439b4db0805a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1202 20:55:01.001727  743547 system_pods.go:61] "kube-proxy-qpzt8" [e7130e4a-3fd7-49ba-b6c6-ea6857c76765] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1202 20:55:01.001736  743547 system_pods.go:61] "kube-scheduler-old-k8s-version-992336" [c4e33a26-6df9-440c-9eff-9197bcdfd55c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1202 20:55:01.001748  743547 system_pods.go:61] "storage-provisioner" [398f9134-7016-4782-9541-255e9925dd8d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 20:55:01.001759  743547 system_pods.go:74] duration metric: took 4.359896ms to wait for pod list to return data ...
	I1202 20:55:01.001773  743547 default_sa.go:34] waiting for default service account to be created ...
	I1202 20:55:01.004230  743547 default_sa.go:45] found service account: "default"
	I1202 20:55:01.004254  743547 default_sa.go:55] duration metric: took 2.473014ms for default service account to be created ...
	I1202 20:55:01.004265  743547 system_pods.go:116] waiting for k8s-apps to be running ...
	I1202 20:55:01.008022  743547 system_pods.go:86] 8 kube-system pods found
	I1202 20:55:01.008062  743547 system_pods.go:89] "coredns-5dd5756b68-ptzsf" [14b9d2d2-4853-419f-ad27-5d6f4c9c7e2c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 20:55:01.008112  743547 system_pods.go:89] "etcd-old-k8s-version-992336" [22527607-8153-442e-97cb-93555cbcdd3a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1202 20:55:01.008124  743547 system_pods.go:89] "kindnet-jvmsp" [51a76a82-d4d0-4909-a7a7-49ad2e3fd9f0] Running
	I1202 20:55:01.008135  743547 system_pods.go:89] "kube-apiserver-old-k8s-version-992336" [5049999c-2987-49b7-ba74-9d7621b0759a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1202 20:55:01.008173  743547 system_pods.go:89] "kube-controller-manager-old-k8s-version-992336" [34f637f6-d1c4-4620-9705-439b4db0805a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1202 20:55:01.008187  743547 system_pods.go:89] "kube-proxy-qpzt8" [e7130e4a-3fd7-49ba-b6c6-ea6857c76765] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1202 20:55:01.008197  743547 system_pods.go:89] "kube-scheduler-old-k8s-version-992336" [c4e33a26-6df9-440c-9eff-9197bcdfd55c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1202 20:55:01.008206  743547 system_pods.go:89] "storage-provisioner" [398f9134-7016-4782-9541-255e9925dd8d] Running
	I1202 20:55:01.008233  743547 system_pods.go:126] duration metric: took 3.944236ms to wait for k8s-apps to be running ...
	I1202 20:55:01.008249  743547 system_svc.go:44] waiting for kubelet service to be running ....
	I1202 20:55:01.008306  743547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 20:55:01.025249  743547 system_svc.go:56] duration metric: took 16.988838ms WaitForService to wait for kubelet
	I1202 20:55:01.025289  743547 kubeadm.go:587] duration metric: took 4.283454748s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 20:55:01.025313  743547 node_conditions.go:102] verifying NodePressure condition ...
	I1202 20:55:01.029446  743547 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1202 20:55:01.029479  743547 node_conditions.go:123] node cpu capacity is 8
	I1202 20:55:01.029504  743547 node_conditions.go:105] duration metric: took 4.184149ms to run NodePressure ...
	I1202 20:55:01.029523  743547 start.go:242] waiting for startup goroutines ...
	I1202 20:55:01.029535  743547 start.go:247] waiting for cluster config update ...
	I1202 20:55:01.029549  743547 start.go:256] writing updated cluster config ...
	I1202 20:55:01.029888  743547 ssh_runner.go:195] Run: rm -f paused
	I1202 20:55:01.034901  743547 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1202 20:55:01.039910  743547 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-ptzsf" in "kube-system" namespace to be "Ready" or be gone ...
	W1202 20:55:03.046930  743547 pod_ready.go:104] pod "coredns-5dd5756b68-ptzsf" is not "Ready", error: <nil>
	I1202 20:55:02.295814  744523 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1: (1.478083279s)
	I1202 20:55:02.295852  744523 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 from cache
	I1202 20:55:02.295876  744523 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.5-0
	I1202 20:55:02.295882  744523 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.447208868s)
	I1202 20:55:02.295924  744523 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.5-0
	I1202 20:55:02.295933  744523 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1202 20:55:02.296025  744523 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1202 20:55:03.814698  744523 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.5-0: (1.518744941s)
	I1202 20:55:03.814738  744523 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 from cache
	I1202 20:55:03.814764  744523 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1202 20:55:03.814810  744523 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.518762728s)
	I1202 20:55:03.814865  744523 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1202 20:55:03.814893  744523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1202 20:55:03.814817  744523 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1202 20:55:04.925056  744523 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: (1.110119383s)
	I1202 20:55:04.925120  744523 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 from cache
	I1202 20:55:04.925145  744523 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1202 20:55:04.925195  744523 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1202 20:55:02.128586  736301 addons.go:530] duration metric: took 566.750529ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1202 20:55:02.404897  736301 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-997805" context rescaled to 1 replicas
	W1202 20:55:03.907516  736301 node_ready.go:57] node "default-k8s-diff-port-997805" has "Ready":"False" status (will retry)
	W1202 20:55:06.528176  736301 node_ready.go:57] node "default-k8s-diff-port-997805" has "Ready":"False" status (will retry)
	W1202 20:55:05.546607  743547 pod_ready.go:104] pod "coredns-5dd5756b68-ptzsf" is not "Ready", error: <nil>
	W1202 20:55:08.053813  743547 pod_ready.go:104] pod "coredns-5dd5756b68-ptzsf" is not "Ready", error: <nil>
	I1202 20:55:06.750340  744523 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: (1.825116414s)
	I1202 20:55:06.750375  744523 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 from cache
	I1202 20:55:06.750420  744523 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1202 20:55:06.750473  744523 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1202 20:55:07.327054  744523 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1202 20:55:07.327129  744523 cache_images.go:125] Successfully loaded all cached images
	I1202 20:55:07.327138  744523 cache_images.go:94] duration metric: took 10.856753s to LoadCachedImages
	I1202 20:55:07.327165  744523 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.35.0-beta.0 crio true true} ...
	I1202 20:55:07.327304  744523 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-245604 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-245604 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1202 20:55:07.327405  744523 ssh_runner.go:195] Run: crio config
	I1202 20:55:07.379951  744523 cni.go:84] Creating CNI manager for ""
	I1202 20:55:07.379987  744523 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 20:55:07.380012  744523 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1202 20:55:07.380052  744523 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-245604 NodeName:newest-cni-245604 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 20:55:07.380240  744523 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-245604"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
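
The kubeadm/kubelet/kube-proxy configuration above is generated from the cluster parameters (node IP, CRI socket, pod CIDR, admission plugins) and written to /var/tmp/minikube/kubeadm.yaml.new before `kubeadm init` runs. A trimmed, hypothetical illustration of rendering the InitConfiguration part from a Go text/template; the struct and template are simplified stand-ins, not minikube's bootstrapper types:

```go
package main

import (
	"os"
	"text/template"
)

// initTmpl mirrors the shape of the generated InitConfiguration above,
// reduced to a few fields for illustration.
const initTmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    - name: "node-ip"
      value: "{{.AdvertiseAddress}}"
  taints: []
`

type initParams struct {
	AdvertiseAddress string
	BindPort         int
	CRISocket        string
	NodeName         string
}

func main() {
	p := initParams{
		AdvertiseAddress: "192.168.103.2",
		BindPort:         8443,
		CRISocket:        "unix:///var/run/crio/crio.sock",
		NodeName:         "newest-cni-245604",
	}
	t := template.Must(template.New("init").Parse(initTmpl))
	// The real flow writes the rendered YAML to the node over SSH.
	if err := t.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}
```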
	I1202 20:55:07.380326  744523 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1202 20:55:07.391201  744523 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0-beta.0': No such file or directory
	
	Initiating transfer...
	I1202 20:55:07.391273  744523 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-beta.0
	I1202 20:55:07.401815  744523 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl.sha256
	I1202 20:55:07.401855  744523 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubelet.sha256
	I1202 20:55:07.401905  744523 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl
	I1202 20:55:07.401953  744523 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 20:55:07.401822  744523 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I1202 20:55:07.402107  744523 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm
	I1202 20:55:07.407476  744523 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm': No such file or directory
	I1202 20:55:07.407517  744523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubeadm --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm (72364216 bytes)
	I1202 20:55:07.407476  744523 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl': No such file or directory
	I1202 20:55:07.407577  744523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubectl --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl (58589368 bytes)
	I1202 20:55:07.424591  744523 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet
	I1202 20:55:07.473519  744523 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet': No such file or directory
	I1202 20:55:07.473565  744523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubelet --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet (58106148 bytes)
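
The "Not caching binary" lines above use the `?checksum=file:<url>.sha256` convention: kubeadm, kubectl, and kubelet are fetched from dl.k8s.io and verified against their published SHA-256 files before being placed under /var/lib/minikube/binaries. A rough sketch of that download-and-verify step (error handling kept minimal; not minikube's actual downloader):

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// downloadWithChecksum fetches a binary, fetches its companion .sha256 file,
// and compares digests before trusting the download.
func downloadWithChecksum(binURL, sumURL, dst string) error {
	resp, err := http.Get(binURL)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	f, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer f.Close()

	h := sha256.New()
	if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
		return err
	}

	sumResp, err := http.Get(sumURL)
	if err != nil {
		return err
	}
	defer sumResp.Body.Close()
	sum, err := io.ReadAll(sumResp.Body)
	if err != nil {
		return err
	}
	fields := strings.Fields(string(sum))
	if len(fields) == 0 {
		return fmt.Errorf("empty checksum file")
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != fields[0] {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, fields[0])
	}
	return nil
}

func main() {
	base := "https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl"
	if err := downloadWithChecksum(base, base+".sha256", "kubectl"); err != nil {
		fmt.Println("error:", err)
	}
}
```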
	I1202 20:55:07.942534  744523 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 20:55:07.951564  744523 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I1202 20:55:07.966391  744523 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1202 20:55:07.983466  744523 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1202 20:55:07.998388  744523 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1202 20:55:08.003218  744523 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 20:55:08.014772  744523 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 20:55:08.099183  744523 ssh_runner.go:195] Run: sudo systemctl start kubelet
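
The /etc/hosts rewrite a few lines above drops any stale `control-plane.minikube.internal` entry and appends one for the current node IP, so the name always resolves locally before kubeadm runs. The same idempotent rewrite expressed in Go (this sketch writes to a side file rather than replacing /etc/hosts, to stay harmless):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostEntry reproduces the bash pipeline from the log: remove any line
// ending in "\t<name>" and append "<ip>\t<name>".
func ensureHostEntry(hostsPath, ip, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop the stale entry
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
	// The real flow copies the temp file over /etc/hosts with sudo.
	return os.WriteFile(hostsPath+".new", []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := ensureHostEntry("/etc/hosts", "192.168.103.2", "control-plane.minikube.internal"); err != nil {
		fmt.Println("error:", err)
	}
}
```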
	I1202 20:55:08.128741  744523 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/newest-cni-245604 for IP: 192.168.103.2
	I1202 20:55:08.128766  744523 certs.go:195] generating shared ca certs ...
	I1202 20:55:08.128785  744523 certs.go:227] acquiring lock for ca certs: {Name:mkea11924193dd4dc459e16d70207bde383196df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:55:08.128953  744523 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-407427/.minikube/ca.key
	I1202 20:55:08.129005  744523 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-407427/.minikube/proxy-client-ca.key
	I1202 20:55:08.129016  744523 certs.go:257] generating profile certs ...
	I1202 20:55:08.129092  744523 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/newest-cni-245604/client.key
	I1202 20:55:08.129113  744523 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/newest-cni-245604/client.crt with IP's: []
	I1202 20:55:08.294554  744523 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/newest-cni-245604/client.crt ...
	I1202 20:55:08.294593  744523 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/newest-cni-245604/client.crt: {Name:mk21b09addeeaa3d31317d267da0ba46cdbf969a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:55:08.294817  744523 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/newest-cni-245604/client.key ...
	I1202 20:55:08.294834  744523 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/newest-cni-245604/client.key: {Name:mke0819f820269a4f8de98b3294913aa1fec7fd2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:55:08.294976  744523 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/newest-cni-245604/apiserver.key.b0e612d2
	I1202 20:55:08.295001  744523 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/newest-cni-245604/apiserver.crt.b0e612d2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1202 20:55:08.433583  744523 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/newest-cni-245604/apiserver.crt.b0e612d2 ...
	I1202 20:55:08.433617  744523 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/newest-cni-245604/apiserver.crt.b0e612d2: {Name:mkefebd269deae008218212f66f0a4f5a87aa20c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:55:08.433838  744523 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/newest-cni-245604/apiserver.key.b0e612d2 ...
	I1202 20:55:08.433861  744523 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/newest-cni-245604/apiserver.key.b0e612d2: {Name:mkd6ac856f0fd42299c25dbdfc17df9c0f05a80e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:55:08.433986  744523 certs.go:382] copying /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/newest-cni-245604/apiserver.crt.b0e612d2 -> /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/newest-cni-245604/apiserver.crt
	I1202 20:55:08.434083  744523 certs.go:386] copying /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/newest-cni-245604/apiserver.key.b0e612d2 -> /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/newest-cni-245604/apiserver.key
	I1202 20:55:08.434142  744523 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/newest-cni-245604/proxy-client.key
	I1202 20:55:08.434160  744523 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/newest-cni-245604/proxy-client.crt with IP's: []
	I1202 20:55:08.761700  744523 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/newest-cni-245604/proxy-client.crt ...
	I1202 20:55:08.761739  744523 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/newest-cni-245604/proxy-client.crt: {Name:mk605b3a88d4c93e27b46e0a7f581a336524f65b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:55:08.761993  744523 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/newest-cni-245604/proxy-client.key ...
	I1202 20:55:08.762019  744523 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/newest-cni-245604/proxy-client.key: {Name:mk5be491623b73b348aa62d0bb88d46e4125409d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
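
The certs.go/crypto.go lines above generate the profile certificates: a client cert, an apiserver serving cert with SANs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2], and an aggregator proxy-client cert, all signed by the shared minikubeCA. A condensed crypto/x509 sketch of the serving-cert case; the in-memory CA and elided error handling are simplifications, since minikube reuses the CA key pair stored under ~/.minikube:

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA for the sketch (errors elided for brevity).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Serving cert with the IP SANs seen in the log.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.103.2"),
		},
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
```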
	I1202 20:55:08.762262  744523 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/411032.pem (1338 bytes)
	W1202 20:55:08.762311  744523 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-407427/.minikube/certs/411032_empty.pem, impossibly tiny 0 bytes
	I1202 20:55:08.762323  744523 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca-key.pem (1679 bytes)
	I1202 20:55:08.762348  744523 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca.pem (1082 bytes)
	I1202 20:55:08.762373  744523 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/cert.pem (1123 bytes)
	I1202 20:55:08.762396  744523 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/key.pem (1675 bytes)
	I1202 20:55:08.762439  744523 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/files/etc/ssl/certs/4110322.pem (1708 bytes)
	I1202 20:55:08.762985  744523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 20:55:08.783273  744523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1202 20:55:08.806927  744523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 20:55:08.827696  744523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1202 20:55:08.848618  744523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/newest-cni-245604/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1202 20:55:08.868662  744523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/newest-cni-245604/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1202 20:55:08.891006  744523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/newest-cni-245604/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 20:55:08.916236  744523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/newest-cni-245604/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1202 20:55:08.936812  744523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 20:55:08.957826  744523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/certs/411032.pem --> /usr/share/ca-certificates/411032.pem (1338 bytes)
	I1202 20:55:08.977945  744523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/files/etc/ssl/certs/4110322.pem --> /usr/share/ca-certificates/4110322.pem (1708 bytes)
	I1202 20:55:08.998361  744523 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 20:55:09.013249  744523 ssh_runner.go:195] Run: openssl version
	I1202 20:55:09.019894  744523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/411032.pem && ln -fs /usr/share/ca-certificates/411032.pem /etc/ssl/certs/411032.pem"
	I1202 20:55:09.029459  744523 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/411032.pem
	I1202 20:55:09.034032  744523 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 20:13 /usr/share/ca-certificates/411032.pem
	I1202 20:55:09.034109  744523 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/411032.pem
	I1202 20:55:09.072802  744523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/411032.pem /etc/ssl/certs/51391683.0"
	I1202 20:55:09.082258  744523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4110322.pem && ln -fs /usr/share/ca-certificates/4110322.pem /etc/ssl/certs/4110322.pem"
	I1202 20:55:09.091835  744523 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4110322.pem
	I1202 20:55:09.096140  744523 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 20:13 /usr/share/ca-certificates/4110322.pem
	I1202 20:55:09.096201  744523 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4110322.pem
	I1202 20:55:09.132108  744523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4110322.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 20:55:09.142782  744523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 20:55:09.153195  744523 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 20:55:09.157649  744523 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 19:55 /usr/share/ca-certificates/minikubeCA.pem
	I1202 20:55:09.157716  744523 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 20:55:09.193211  744523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 20:55:09.203054  744523 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 20:55:09.207632  744523 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1202 20:55:09.207698  744523 kubeadm.go:401] StartCluster: {Name:newest-cni-245604 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-245604 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 20:55:09.207798  744523 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 20:55:09.207862  744523 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 20:55:09.238428  744523 cri.go:89] found id: ""
	I1202 20:55:09.238499  744523 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 20:55:09.248151  744523 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1202 20:55:09.257704  744523 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1202 20:55:09.257786  744523 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 20:55:09.266501  744523 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1202 20:55:09.266520  744523 kubeadm.go:158] found existing configuration files:
	
	I1202 20:55:09.266571  744523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1202 20:55:09.275990  744523 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1202 20:55:09.276083  744523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1202 20:55:09.284543  744523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1202 20:55:09.293889  744523 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1202 20:55:09.293983  744523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 20:55:09.302404  744523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1202 20:55:09.311315  744523 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1202 20:55:09.311387  744523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 20:55:09.320016  744523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1202 20:55:09.329127  744523 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1202 20:55:09.329200  744523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1202 20:55:09.337504  744523 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1202 20:55:09.450141  744523 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1202 20:55:09.526996  744523 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1202 20:55:08.905121  736301 node_ready.go:57] node "default-k8s-diff-port-997805" has "Ready":"False" status (will retry)
	W1202 20:55:11.405811  736301 node_ready.go:57] node "default-k8s-diff-port-997805" has "Ready":"False" status (will retry)
	W1202 20:55:10.546176  743547 pod_ready.go:104] pod "coredns-5dd5756b68-ptzsf" is not "Ready", error: <nil>
	W1202 20:55:13.046772  743547 pod_ready.go:104] pod "coredns-5dd5756b68-ptzsf" is not "Ready", error: <nil>
	I1202 20:55:13.409009  736301 node_ready.go:49] node "default-k8s-diff-port-997805" is "Ready"
	I1202 20:55:13.409043  736301 node_ready.go:38] duration metric: took 11.507409908s for node "default-k8s-diff-port-997805" to be "Ready" ...
	I1202 20:55:13.409060  736301 api_server.go:52] waiting for apiserver process to appear ...
	I1202 20:55:13.409144  736301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:55:13.428459  736301 api_server.go:72] duration metric: took 11.866557952s to wait for apiserver process to appear ...
	I1202 20:55:13.428518  736301 api_server.go:88] waiting for apiserver healthz status ...
	I1202 20:55:13.428546  736301 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1202 20:55:13.435123  736301 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1202 20:55:13.436462  736301 api_server.go:141] control plane version: v1.34.2
	I1202 20:55:13.436496  736301 api_server.go:131] duration metric: took 7.968671ms to wait for apiserver health ...
	I1202 20:55:13.436508  736301 system_pods.go:43] waiting for kube-system pods to appear ...
	I1202 20:55:13.441129  736301 system_pods.go:59] 8 kube-system pods found
	I1202 20:55:13.441171  736301 system_pods.go:61] "coredns-66bc5c9577-jrln7" [37de7399-6357-4f08-9240-fc9e0d884f47] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 20:55:13.441180  736301 system_pods.go:61] "etcd-default-k8s-diff-port-997805" [446321a2-6abf-495a-8198-da2e43d8c18d] Running
	I1202 20:55:13.441188  736301 system_pods.go:61] "kindnet-rzqpn" [eabc6de0-0707-4a1b-ab5a-4f6e8255bcfb] Running
	I1202 20:55:13.441193  736301 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-997805" [04c7ea7b-9b4e-44ee-8148-107daccf6b7b] Running
	I1202 20:55:13.441205  736301 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-997805" [65af3257-fa45-4d8c-bab3-46c0390ab8df] Running
	I1202 20:55:13.441210  736301 system_pods.go:61] "kube-proxy-s2jpn" [407f6b3c-8d8b-47b0-b994-c061eedc6420] Running
	I1202 20:55:13.441215  736301 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-997805" [803a952f-54ba-4326-b70f-907fc7db368e] Running
	I1202 20:55:13.441222  736301 system_pods.go:61] "storage-provisioner" [08893b97-1192-4f4e-8636-8f2ba82c853d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 20:55:13.441235  736301 system_pods.go:74] duration metric: took 4.718273ms to wait for pod list to return data ...
	I1202 20:55:13.441248  736301 default_sa.go:34] waiting for default service account to be created ...
	I1202 20:55:13.444692  736301 default_sa.go:45] found service account: "default"
	I1202 20:55:13.444725  736301 default_sa.go:55] duration metric: took 3.465464ms for default service account to be created ...
	I1202 20:55:13.444738  736301 system_pods.go:116] waiting for k8s-apps to be running ...
	I1202 20:55:13.449425  736301 system_pods.go:86] 8 kube-system pods found
	I1202 20:55:13.449465  736301 system_pods.go:89] "coredns-66bc5c9577-jrln7" [37de7399-6357-4f08-9240-fc9e0d884f47] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 20:55:13.449473  736301 system_pods.go:89] "etcd-default-k8s-diff-port-997805" [446321a2-6abf-495a-8198-da2e43d8c18d] Running
	I1202 20:55:13.449482  736301 system_pods.go:89] "kindnet-rzqpn" [eabc6de0-0707-4a1b-ab5a-4f6e8255bcfb] Running
	I1202 20:55:13.449487  736301 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-997805" [04c7ea7b-9b4e-44ee-8148-107daccf6b7b] Running
	I1202 20:55:13.449493  736301 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-997805" [65af3257-fa45-4d8c-bab3-46c0390ab8df] Running
	I1202 20:55:13.449498  736301 system_pods.go:89] "kube-proxy-s2jpn" [407f6b3c-8d8b-47b0-b994-c061eedc6420] Running
	I1202 20:55:13.449504  736301 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-997805" [803a952f-54ba-4326-b70f-907fc7db368e] Running
	I1202 20:55:13.449512  736301 system_pods.go:89] "storage-provisioner" [08893b97-1192-4f4e-8636-8f2ba82c853d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 20:55:13.449542  736301 retry.go:31] will retry after 197.970445ms: missing components: kube-dns
	I1202 20:55:13.653108  736301 system_pods.go:86] 8 kube-system pods found
	I1202 20:55:13.653145  736301 system_pods.go:89] "coredns-66bc5c9577-jrln7" [37de7399-6357-4f08-9240-fc9e0d884f47] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 20:55:13.653161  736301 system_pods.go:89] "etcd-default-k8s-diff-port-997805" [446321a2-6abf-495a-8198-da2e43d8c18d] Running
	I1202 20:55:13.653170  736301 system_pods.go:89] "kindnet-rzqpn" [eabc6de0-0707-4a1b-ab5a-4f6e8255bcfb] Running
	I1202 20:55:13.653176  736301 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-997805" [04c7ea7b-9b4e-44ee-8148-107daccf6b7b] Running
	I1202 20:55:13.653182  736301 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-997805" [65af3257-fa45-4d8c-bab3-46c0390ab8df] Running
	I1202 20:55:13.653187  736301 system_pods.go:89] "kube-proxy-s2jpn" [407f6b3c-8d8b-47b0-b994-c061eedc6420] Running
	I1202 20:55:13.653192  736301 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-997805" [803a952f-54ba-4326-b70f-907fc7db368e] Running
	I1202 20:55:13.653199  736301 system_pods.go:89] "storage-provisioner" [08893b97-1192-4f4e-8636-8f2ba82c853d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 20:55:13.653220  736301 retry.go:31] will retry after 312.600116ms: missing components: kube-dns
	I1202 20:55:13.971151  736301 system_pods.go:86] 8 kube-system pods found
	I1202 20:55:13.971209  736301 system_pods.go:89] "coredns-66bc5c9577-jrln7" [37de7399-6357-4f08-9240-fc9e0d884f47] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 20:55:13.971230  736301 system_pods.go:89] "etcd-default-k8s-diff-port-997805" [446321a2-6abf-495a-8198-da2e43d8c18d] Running
	I1202 20:55:13.971254  736301 system_pods.go:89] "kindnet-rzqpn" [eabc6de0-0707-4a1b-ab5a-4f6e8255bcfb] Running
	I1202 20:55:13.971260  736301 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-997805" [04c7ea7b-9b4e-44ee-8148-107daccf6b7b] Running
	I1202 20:55:13.971266  736301 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-997805" [65af3257-fa45-4d8c-bab3-46c0390ab8df] Running
	I1202 20:55:13.971278  736301 system_pods.go:89] "kube-proxy-s2jpn" [407f6b3c-8d8b-47b0-b994-c061eedc6420] Running
	I1202 20:55:13.971283  736301 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-997805" [803a952f-54ba-4326-b70f-907fc7db368e] Running
	I1202 20:55:13.971290  736301 system_pods.go:89] "storage-provisioner" [08893b97-1192-4f4e-8636-8f2ba82c853d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 20:55:13.971313  736301 retry.go:31] will retry after 371.188364ms: missing components: kube-dns
	I1202 20:55:14.348015  736301 system_pods.go:86] 8 kube-system pods found
	I1202 20:55:14.348053  736301 system_pods.go:89] "coredns-66bc5c9577-jrln7" [37de7399-6357-4f08-9240-fc9e0d884f47] Running
	I1202 20:55:14.348061  736301 system_pods.go:89] "etcd-default-k8s-diff-port-997805" [446321a2-6abf-495a-8198-da2e43d8c18d] Running
	I1202 20:55:14.348080  736301 system_pods.go:89] "kindnet-rzqpn" [eabc6de0-0707-4a1b-ab5a-4f6e8255bcfb] Running
	I1202 20:55:14.348086  736301 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-997805" [04c7ea7b-9b4e-44ee-8148-107daccf6b7b] Running
	I1202 20:55:14.348091  736301 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-997805" [65af3257-fa45-4d8c-bab3-46c0390ab8df] Running
	I1202 20:55:14.348096  736301 system_pods.go:89] "kube-proxy-s2jpn" [407f6b3c-8d8b-47b0-b994-c061eedc6420] Running
	I1202 20:55:14.348102  736301 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-997805" [803a952f-54ba-4326-b70f-907fc7db368e] Running
	I1202 20:55:14.348107  736301 system_pods.go:89] "storage-provisioner" [08893b97-1192-4f4e-8636-8f2ba82c853d] Running
	I1202 20:55:14.348118  736301 system_pods.go:126] duration metric: took 903.37182ms to wait for k8s-apps to be running ...
	I1202 20:55:14.348133  736301 system_svc.go:44] waiting for kubelet service to be running ....
	I1202 20:55:14.348196  736301 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 20:55:14.367188  736301 system_svc.go:56] duration metric: took 19.021039ms WaitForService to wait for kubelet
	I1202 20:55:14.367227  736301 kubeadm.go:587] duration metric: took 12.80541748s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 20:55:14.367253  736301 node_conditions.go:102] verifying NodePressure condition ...
	I1202 20:55:14.371134  736301 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1202 20:55:14.371179  736301 node_conditions.go:123] node cpu capacity is 8
	I1202 20:55:14.371197  736301 node_conditions.go:105] duration metric: took 3.938624ms to run NodePressure ...
	I1202 20:55:14.371215  736301 start.go:242] waiting for startup goroutines ...
	I1202 20:55:14.371226  736301 start.go:247] waiting for cluster config update ...
	I1202 20:55:14.371254  736301 start.go:256] writing updated cluster config ...
	I1202 20:55:14.371604  736301 ssh_runner.go:195] Run: rm -f paused
	I1202 20:55:14.379210  736301 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1202 20:55:14.387240  736301 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-jrln7" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:55:14.394276  736301 pod_ready.go:94] pod "coredns-66bc5c9577-jrln7" is "Ready"
	I1202 20:55:14.395107  736301 pod_ready.go:86] duration metric: took 7.823324ms for pod "coredns-66bc5c9577-jrln7" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:55:14.401910  736301 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-997805" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:55:14.411512  736301 pod_ready.go:94] pod "etcd-default-k8s-diff-port-997805" is "Ready"
	I1202 20:55:14.411558  736301 pod_ready.go:86] duration metric: took 9.620923ms for pod "etcd-default-k8s-diff-port-997805" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:55:14.447901  736301 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-997805" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:55:14.454179  736301 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-997805" is "Ready"
	I1202 20:55:14.454210  736301 pod_ready.go:86] duration metric: took 6.226449ms for pod "kube-apiserver-default-k8s-diff-port-997805" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:55:14.456579  736301 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-997805" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:55:14.785098  736301 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-997805" is "Ready"
	I1202 20:55:14.785134  736301 pod_ready.go:86] duration metric: took 328.527351ms for pod "kube-controller-manager-default-k8s-diff-port-997805" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:55:14.984980  736301 pod_ready.go:83] waiting for pod "kube-proxy-s2jpn" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:55:15.384642  736301 pod_ready.go:94] pod "kube-proxy-s2jpn" is "Ready"
	I1202 20:55:15.384681  736301 pod_ready.go:86] duration metric: took 399.670012ms for pod "kube-proxy-s2jpn" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:55:15.584889  736301 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-997805" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:55:15.984320  736301 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-997805" is "Ready"
	I1202 20:55:15.984358  736301 pod_ready.go:86] duration metric: took 399.436392ms for pod "kube-scheduler-default-k8s-diff-port-997805" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:55:15.984380  736301 pod_ready.go:40] duration metric: took 1.605130751s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1202 20:55:16.047890  736301 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1202 20:55:16.050340  736301 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-997805" cluster and "default" namespace by default
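	At this point the default-k8s-diff-port-997805 bring-up has passed every gate the log walks through: apiserver /healthz on port 8444, the eight kube-system pods, the default service account, the kubelet service, and the per-pod "Ready" checks. A sketch of the equivalent manual verification, reusing the address and names from the log:

	    # Hedged sketch; IP/port and profile name come from the log above.
	    curl -sk https://192.168.85.2:8444/healthz                   # expect: ok
	    kubectl get node default-k8s-diff-port-997805                # expect STATUS Ready
	    kubectl -n kube-system get pods                              # coredns, etcd, kube-*, storage-provisioner all Running
	    kubectl -n default get serviceaccount default                # default service account exists
	    sudo systemctl is-active --quiet kubelet && echo "kubelet active"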
	I1202 20:55:17.727885  744523 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1202 20:55:17.727980  744523 kubeadm.go:319] [preflight] Running pre-flight checks
	I1202 20:55:17.728169  744523 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1202 20:55:17.728253  744523 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1202 20:55:17.728332  744523 kubeadm.go:319] OS: Linux
	I1202 20:55:17.728410  744523 kubeadm.go:319] CGROUPS_CPU: enabled
	I1202 20:55:17.728482  744523 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1202 20:55:17.728547  744523 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1202 20:55:17.728622  744523 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1202 20:55:17.728690  744523 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1202 20:55:17.728761  744523 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1202 20:55:17.728820  744523 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1202 20:55:17.728871  744523 kubeadm.go:319] CGROUPS_IO: enabled
	I1202 20:55:17.728957  744523 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1202 20:55:17.729110  744523 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1202 20:55:17.729262  744523 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1202 20:55:17.729355  744523 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1202 20:55:17.732315  744523 out.go:252]   - Generating certificates and keys ...
	I1202 20:55:17.732442  744523 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1202 20:55:17.732545  744523 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1202 20:55:17.732644  744523 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1202 20:55:17.732784  744523 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1202 20:55:17.732890  744523 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1202 20:55:17.732967  744523 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1202 20:55:17.733023  744523 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1202 20:55:17.733202  744523 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-245604] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1202 20:55:17.733257  744523 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1202 20:55:17.733428  744523 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-245604] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1202 20:55:17.733527  744523 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1202 20:55:17.733635  744523 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1202 20:55:17.733710  744523 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1202 20:55:17.733807  744523 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1202 20:55:17.733859  744523 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1202 20:55:17.733921  744523 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1202 20:55:17.734012  744523 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1202 20:55:17.734172  744523 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1202 20:55:17.734269  744523 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1202 20:55:17.734394  744523 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1202 20:55:17.734481  744523 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1202 20:55:17.736342  744523 out.go:252]   - Booting up control plane ...
	I1202 20:55:17.736480  744523 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1202 20:55:17.736596  744523 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1202 20:55:17.736693  744523 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1202 20:55:17.736872  744523 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1202 20:55:17.737043  744523 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1202 20:55:17.737243  744523 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1202 20:55:17.737358  744523 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1202 20:55:17.737420  744523 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1202 20:55:17.737602  744523 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1202 20:55:17.737768  744523 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1202 20:55:17.737849  744523 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 502.139829ms
	I1202 20:55:17.737973  744523 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1202 20:55:17.738135  744523 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1202 20:55:17.738271  744523 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1202 20:55:17.738383  744523 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1202 20:55:17.738463  744523 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.006451763s
	I1202 20:55:17.738554  744523 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.25008497s
	I1202 20:55:17.738650  744523 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.001931401s
	I1202 20:55:17.738800  744523 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1202 20:55:17.738981  744523 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1202 20:55:17.739041  744523 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1202 20:55:17.739329  744523 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-245604 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1202 20:55:17.739413  744523 kubeadm.go:319] [bootstrap-token] Using token: 7nkj4u.5737xh7thqz8h9m6
	I1202 20:55:17.744569  744523 out.go:252]   - Configuring RBAC rules ...
	I1202 20:55:17.744713  744523 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1202 20:55:17.744844  744523 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1202 20:55:17.745106  744523 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1202 20:55:17.745296  744523 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1202 20:55:17.745457  744523 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1202 20:55:17.745579  744523 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1202 20:55:17.745744  744523 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1202 20:55:17.745800  744523 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1202 20:55:17.745866  744523 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1202 20:55:17.745876  744523 kubeadm.go:319] 
	I1202 20:55:17.745958  744523 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1202 20:55:17.745967  744523 kubeadm.go:319] 
	I1202 20:55:17.746089  744523 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1202 20:55:17.746100  744523 kubeadm.go:319] 
	I1202 20:55:17.746133  744523 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1202 20:55:17.746211  744523 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1202 20:55:17.746288  744523 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1202 20:55:17.746297  744523 kubeadm.go:319] 
	I1202 20:55:17.746370  744523 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1202 20:55:17.746380  744523 kubeadm.go:319] 
	I1202 20:55:17.746447  744523 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1202 20:55:17.746456  744523 kubeadm.go:319] 
	I1202 20:55:17.746526  744523 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1202 20:55:17.746642  744523 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1202 20:55:17.746741  744523 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1202 20:55:17.746752  744523 kubeadm.go:319] 
	I1202 20:55:17.746853  744523 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1202 20:55:17.746921  744523 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1202 20:55:17.746927  744523 kubeadm.go:319] 
	I1202 20:55:17.746996  744523 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 7nkj4u.5737xh7thqz8h9m6 \
	I1202 20:55:17.747105  744523 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:48349ffa2369b87cd15480ece56edb1c5c5d97a828930f9dfbf1d1ceccf80ff4 \
	I1202 20:55:17.747126  744523 kubeadm.go:319] 	--control-plane 
	I1202 20:55:17.747130  744523 kubeadm.go:319] 
	I1202 20:55:17.747200  744523 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1202 20:55:17.747206  744523 kubeadm.go:319] 
	I1202 20:55:17.747278  744523 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 7nkj4u.5737xh7thqz8h9m6 \
	I1202 20:55:17.747387  744523 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:48349ffa2369b87cd15480ece56edb1c5c5d97a828930f9dfbf1d1ceccf80ff4 
	I1202 20:55:17.747398  744523 cni.go:84] Creating CNI manager for ""
	I1202 20:55:17.747411  744523 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 20:55:17.749351  744523 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1202 20:55:15.046995  743547 pod_ready.go:104] pod "coredns-5dd5756b68-ptzsf" is not "Ready", error: <nil>
	W1202 20:55:17.058635  743547 pod_ready.go:104] pod "coredns-5dd5756b68-ptzsf" is not "Ready", error: <nil>
	I1202 20:55:17.750855  744523 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1202 20:55:17.756405  744523 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl ...
	I1202 20:55:17.756429  744523 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1202 20:55:17.773111  744523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
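	Here minikube detects the "docker" driver + "crio" runtime combination, copies a kindnet manifest to /var/tmp/minikube/cni.yaml, and applies it with the pinned kubectl binary. A quick way to confirm the CNI actually came up, assuming kindnet's usual app label (the label is not shown in this log):

	    # Hedged sketch; the label selector is an assumption, the binary and kubeconfig paths are from the log.
	    stat /opt/cni/bin/portmap                                    # same presence check as the log
	    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl \
	      --kubeconfig=/var/lib/minikube/kubeconfig \
	      -n kube-system get pods -l app=kindnet -o wide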
	I1202 20:55:18.053229  744523 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1202 20:55:18.053294  744523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 20:55:18.053309  744523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-245604 minikube.k8s.io/updated_at=2025_12_02T20_55_18_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=c0dec3f81fa1d67ed4ee425e0873d0aa009f9b92 minikube.k8s.io/name=newest-cni-245604 minikube.k8s.io/primary=true
	I1202 20:55:18.147980  744523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 20:55:18.174749  744523 ops.go:34] apiserver oom_adj: -16
	I1202 20:55:18.649089  744523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 20:55:19.148197  744523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 20:55:19.648300  744523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 20:55:20.148460  744523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 20:55:20.648917  744523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 20:55:21.148125  744523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 20:55:21.648136  744523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 20:55:22.148743  744523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 20:55:22.219200  744523 kubeadm.go:1114] duration metric: took 4.165964295s to wait for elevateKubeSystemPrivileges
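	The repeated "get sa default" runs above are minikube polling for the default service account before granting it cluster-admin via the minikube-rbac binding created at 20:55:18 (the "elevateKubeSystemPrivileges" step). The same two commands, runnable by hand:

	    # Hedged sketch of the privilege-elevation step, copied from the commands in the log.
	    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	      get sa default
	    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	      create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default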
	I1202 20:55:22.219239  744523 kubeadm.go:403] duration metric: took 13.011548887s to StartCluster
	I1202 20:55:22.219285  744523 settings.go:142] acquiring lock: {Name:mk79858205e0c110b31d2645b49097e349ff37c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:55:22.219363  744523 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-407427/kubeconfig
	I1202 20:55:22.220725  744523 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-407427/kubeconfig: {Name:mk71769b01652992a8aaa239aae4f5c38b3500e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:55:22.220981  744523 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 20:55:22.221022  744523 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1202 20:55:22.220994  744523 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1202 20:55:22.221146  744523 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-245604"
	I1202 20:55:22.221163  744523 addons.go:70] Setting default-storageclass=true in profile "newest-cni-245604"
	I1202 20:55:22.221199  744523 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-245604"
	I1202 20:55:22.221169  744523 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-245604"
	I1202 20:55:22.221263  744523 config.go:182] Loaded profile config "newest-cni-245604": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 20:55:22.221301  744523 host.go:66] Checking if "newest-cni-245604" exists ...
	I1202 20:55:22.221566  744523 cli_runner.go:164] Run: docker container inspect newest-cni-245604 --format={{.State.Status}}
	I1202 20:55:22.221873  744523 cli_runner.go:164] Run: docker container inspect newest-cni-245604 --format={{.State.Status}}
	I1202 20:55:22.222714  744523 out.go:179] * Verifying Kubernetes components...
	I1202 20:55:22.223874  744523 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 20:55:22.247282  744523 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 20:55:22.248019  744523 addons.go:239] Setting addon default-storageclass=true in "newest-cni-245604"
	I1202 20:55:22.248083  744523 host.go:66] Checking if "newest-cni-245604" exists ...
	I1202 20:55:22.248524  744523 cli_runner.go:164] Run: docker container inspect newest-cni-245604 --format={{.State.Status}}
	I1202 20:55:22.248538  744523 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 20:55:22.248557  744523 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1202 20:55:22.248628  744523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-245604
	I1202 20:55:22.279723  744523 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1202 20:55:22.279751  744523 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1202 20:55:22.279826  744523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-245604
	I1202 20:55:22.281773  744523 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/newest-cni-245604/id_rsa Username:docker}
	I1202 20:55:22.304309  744523 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/newest-cni-245604/id_rsa Username:docker}
	I1202 20:55:22.316682  744523 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1202 20:55:22.364395  744523 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 20:55:22.401050  744523 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 20:55:22.421315  744523 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1202 20:55:22.484689  744523 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1202 20:55:22.486166  744523 api_server.go:52] waiting for apiserver process to appear ...
	I1202 20:55:22.486229  744523 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:55:22.792266  744523 api_server.go:72] duration metric: took 571.24928ms to wait for apiserver process to appear ...
	I1202 20:55:22.792298  744523 api_server.go:88] waiting for apiserver healthz status ...
	I1202 20:55:22.792322  744523 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1202 20:55:22.797807  744523 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1202 20:55:22.798870  744523 api_server.go:141] control plane version: v1.35.0-beta.0
	I1202 20:55:22.798898  744523 api_server.go:131] duration metric: took 6.592941ms to wait for apiserver health ...
	I1202 20:55:22.798907  744523 system_pods.go:43] waiting for kube-system pods to appear ...
	I1202 20:55:22.799764  744523 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1202 20:55:22.801176  744523 addons.go:530] duration metric: took 580.159704ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1202 20:55:22.802447  744523 system_pods.go:59] 8 kube-system pods found
	I1202 20:55:22.802480  744523 system_pods.go:61] "coredns-7d764666f9-blfz2" [431846c2-b261-4ac9-ae34-f5e7c9bd7c30] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1202 20:55:22.802489  744523 system_pods.go:61] "etcd-newest-cni-245604" [0153ab66-c89e-4cb9-956f-af095ae01a6d] Running
	I1202 20:55:22.802501  744523 system_pods.go:61] "kindnet-flbpz" [5931b461-203e-4906-9cb7-0a7ddcf9c5ae] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1202 20:55:22.802512  744523 system_pods.go:61] "kube-apiserver-newest-cni-245604" [aedbda6a-d95b-4616-9c31-4931593df7d7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1202 20:55:22.802521  744523 system_pods.go:61] "kube-controller-manager-newest-cni-245604" [f659dbd1-c031-4078-a1e3-e75ac74f2ea4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1202 20:55:22.802530  744523 system_pods.go:61] "kube-proxy-khm6s" [990486ba-3da5-4666-b441-52e3fcc4c81f] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1202 20:55:22.802536  744523 system_pods.go:61] "kube-scheduler-newest-cni-245604" [652fff1e-9b61-4947-a077-8f039064ad96] Running
	I1202 20:55:22.802551  744523 system_pods.go:61] "storage-provisioner" [6eb8872b-114f-434c-b0ca-a8eaa4c5da9e] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1202 20:55:22.802561  744523 system_pods.go:74] duration metric: took 3.647122ms to wait for pod list to return data ...
	I1202 20:55:22.802576  744523 default_sa.go:34] waiting for default service account to be created ...
	I1202 20:55:22.805280  744523 default_sa.go:45] found service account: "default"
	I1202 20:55:22.805300  744523 default_sa.go:55] duration metric: took 2.718099ms for default service account to be created ...
	I1202 20:55:22.805312  744523 kubeadm.go:587] duration metric: took 584.301651ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1202 20:55:22.805329  744523 node_conditions.go:102] verifying NodePressure condition ...
	I1202 20:55:22.807801  744523 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1202 20:55:22.807824  744523 node_conditions.go:123] node cpu capacity is 8
	I1202 20:55:22.807840  744523 node_conditions.go:105] duration metric: took 2.507559ms to run NodePressure ...
	I1202 20:55:22.807853  744523 start.go:242] waiting for startup goroutines ...
	I1202 20:55:22.989790  744523 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-245604" context rescaled to 1 replicas
	I1202 20:55:22.989832  744523 start.go:247] waiting for cluster config update ...
	I1202 20:55:22.989848  744523 start.go:256] writing updated cluster config ...
	I1202 20:55:22.990236  744523 ssh_runner.go:195] Run: rm -f paused
	I1202 20:55:23.046430  744523 start.go:625] kubectl: 1.34.2, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1202 20:55:23.049142  744523 out.go:179] * Done! kubectl is now configured to use "newest-cni-245604" cluster and "default" namespace by default
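	newest-cni-245604 finishes with a one-step minor skew (kubectl 1.34.2 against cluster 1.35.0-beta.0) and with coredns, kindnet, kube-proxy and storage-provisioner still Pending on an untolerated node taint (typically the not-ready taint while the CNI starts); minikube also rescaled the coredns deployment to one replica. Commands to inspect exactly that state, assuming the kubeconfig context name equals the profile name:

	    # Hedged sketch; the context name is an assumption based on the profile name in the log.
	    kubectl --context newest-cni-245604 describe node newest-cni-245604 | grep -A2 'Taints:'
	    kubectl --context newest-cni-245604 -n kube-system get deploy coredns   # expect 1 desired replica
	    kubectl --context newest-cni-245604 -n kube-system get pods -o wide     # the Pending pods listed above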
	W1202 20:55:19.545822  743547 pod_ready.go:104] pod "coredns-5dd5756b68-ptzsf" is not "Ready", error: <nil>
	W1202 20:55:22.046695  743547 pod_ready.go:104] pod "coredns-5dd5756b68-ptzsf" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Dec 02 20:55:13 default-k8s-diff-port-997805 crio[780]: time="2025-12-02T20:55:13.354882755Z" level=info msg="Starting container: 77cd2dfe9a908a07fb9a6dc6e15a960358fc62ab0948843e050b29bdd5055b0b" id=91337537-a0ba-4f55-b00a-a63172da3c5c name=/runtime.v1.RuntimeService/StartContainer
	Dec 02 20:55:13 default-k8s-diff-port-997805 crio[780]: time="2025-12-02T20:55:13.360474427Z" level=info msg="Started container" PID=1833 containerID=77cd2dfe9a908a07fb9a6dc6e15a960358fc62ab0948843e050b29bdd5055b0b description=kube-system/coredns-66bc5c9577-jrln7/coredns id=91337537-a0ba-4f55-b00a-a63172da3c5c name=/runtime.v1.RuntimeService/StartContainer sandboxID=a11114fe331e9b3c2e459bb6fe89530516761e26d652b46b2ade2f7d65165664
	Dec 02 20:55:16 default-k8s-diff-port-997805 crio[780]: time="2025-12-02T20:55:16.581302269Z" level=info msg="Running pod sandbox: default/busybox/POD" id=7232b644-0164-437e-803a-c3a9dc6457a3 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 02 20:55:16 default-k8s-diff-port-997805 crio[780]: time="2025-12-02T20:55:16.581406626Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 20:55:16 default-k8s-diff-port-997805 crio[780]: time="2025-12-02T20:55:16.595275504Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:dedc750798ce86276b840366e5038a877eb98fb8eeda4490954b0ed264a372a1 UID:b5b6709a-d731-4be3-a6d0-ecbcb3655de4 NetNS:/var/run/netns/6cd56184-3ce3-4c99-a736-23b5afa47a66 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008aac8}] Aliases:map[]}"
	Dec 02 20:55:16 default-k8s-diff-port-997805 crio[780]: time="2025-12-02T20:55:16.595322629Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 02 20:55:16 default-k8s-diff-port-997805 crio[780]: time="2025-12-02T20:55:16.61394259Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:dedc750798ce86276b840366e5038a877eb98fb8eeda4490954b0ed264a372a1 UID:b5b6709a-d731-4be3-a6d0-ecbcb3655de4 NetNS:/var/run/netns/6cd56184-3ce3-4c99-a736-23b5afa47a66 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008aac8}] Aliases:map[]}"
	Dec 02 20:55:16 default-k8s-diff-port-997805 crio[780]: time="2025-12-02T20:55:16.61414966Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 02 20:55:16 default-k8s-diff-port-997805 crio[780]: time="2025-12-02T20:55:16.617982556Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 02 20:55:16 default-k8s-diff-port-997805 crio[780]: time="2025-12-02T20:55:16.6204921Z" level=info msg="Ran pod sandbox dedc750798ce86276b840366e5038a877eb98fb8eeda4490954b0ed264a372a1 with infra container: default/busybox/POD" id=7232b644-0164-437e-803a-c3a9dc6457a3 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 02 20:55:16 default-k8s-diff-port-997805 crio[780]: time="2025-12-02T20:55:16.622467437Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=a87aaea5-ffe9-4a13-9a52-80f7e8d160a0 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 20:55:16 default-k8s-diff-port-997805 crio[780]: time="2025-12-02T20:55:16.622648543Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=a87aaea5-ffe9-4a13-9a52-80f7e8d160a0 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 20:55:16 default-k8s-diff-port-997805 crio[780]: time="2025-12-02T20:55:16.622705443Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=a87aaea5-ffe9-4a13-9a52-80f7e8d160a0 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 20:55:16 default-k8s-diff-port-997805 crio[780]: time="2025-12-02T20:55:16.623963439Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=29710c0d-3818-4add-a0d6-fe3ed7070bdb name=/runtime.v1.ImageService/PullImage
	Dec 02 20:55:16 default-k8s-diff-port-997805 crio[780]: time="2025-12-02T20:55:16.626713974Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 02 20:55:18 default-k8s-diff-port-997805 crio[780]: time="2025-12-02T20:55:18.682825131Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=29710c0d-3818-4add-a0d6-fe3ed7070bdb name=/runtime.v1.ImageService/PullImage
	Dec 02 20:55:18 default-k8s-diff-port-997805 crio[780]: time="2025-12-02T20:55:18.683832134Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=d932581a-dc07-4f3d-8f9e-4c2b4dfde0d6 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 20:55:18 default-k8s-diff-port-997805 crio[780]: time="2025-12-02T20:55:18.685482424Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=91e28d88-c7c2-4829-a849-7933734d742f name=/runtime.v1.ImageService/ImageStatus
	Dec 02 20:55:18 default-k8s-diff-port-997805 crio[780]: time="2025-12-02T20:55:18.690113584Z" level=info msg="Creating container: default/busybox/busybox" id=2b1d868f-43bf-42e8-b253-1c71f2c59b6a name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 20:55:18 default-k8s-diff-port-997805 crio[780]: time="2025-12-02T20:55:18.690267075Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 20:55:18 default-k8s-diff-port-997805 crio[780]: time="2025-12-02T20:55:18.695487406Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 20:55:18 default-k8s-diff-port-997805 crio[780]: time="2025-12-02T20:55:18.69610381Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 20:55:18 default-k8s-diff-port-997805 crio[780]: time="2025-12-02T20:55:18.738566052Z" level=info msg="Created container 1cbb3433db5fda93e8e37416a961759ca415a1354c181aa5a9f399ed72532ca7: default/busybox/busybox" id=2b1d868f-43bf-42e8-b253-1c71f2c59b6a name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 20:55:18 default-k8s-diff-port-997805 crio[780]: time="2025-12-02T20:55:18.739361221Z" level=info msg="Starting container: 1cbb3433db5fda93e8e37416a961759ca415a1354c181aa5a9f399ed72532ca7" id=e6415334-5ac2-4801-81a3-cd529fbe00ac name=/runtime.v1.RuntimeService/StartContainer
	Dec 02 20:55:18 default-k8s-diff-port-997805 crio[780]: time="2025-12-02T20:55:18.741113431Z" level=info msg="Started container" PID=1908 containerID=1cbb3433db5fda93e8e37416a961759ca415a1354c181aa5a9f399ed72532ca7 description=default/busybox/busybox id=e6415334-5ac2-4801-81a3-cd529fbe00ac name=/runtime.v1.RuntimeService/StartContainer sandboxID=dedc750798ce86276b840366e5038a877eb98fb8eeda4490954b0ed264a372a1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	1cbb3433db5fd       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   7 seconds ago       Running             busybox                   0                   dedc750798ce8       busybox                                                default
	77cd2dfe9a908       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      12 seconds ago      Running             coredns                   0                   a11114fe331e9       coredns-66bc5c9577-jrln7                               kube-system
	f54a92cbe0469       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 seconds ago      Running             storage-provisioner       0                   28992bdcb7c2d       storage-provisioner                                    kube-system
	bdccb296f26d1       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                      23 seconds ago      Running             kube-proxy                0                   8b042281fa3f4       kube-proxy-s2jpn                                       kube-system
	29f6bc8e63585       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      23 seconds ago      Running             kindnet-cni               0                   9a86ca546f38d       kindnet-rzqpn                                          kube-system
	d10cd0e59c659       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                      34 seconds ago      Running             kube-controller-manager   0                   3746e0d0eadf1       kube-controller-manager-default-k8s-diff-port-997805   kube-system
	2f77aed62b591       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                      34 seconds ago      Running             kube-apiserver            0                   25b939136c855       kube-apiserver-default-k8s-diff-port-997805            kube-system
	13f327e0971f3       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                      34 seconds ago      Running             etcd                      0                   dba3abf6ff786       etcd-default-k8s-diff-port-997805                      kube-system
	61deaac8c9479       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                      34 seconds ago      Running             kube-scheduler            0                   6fa65f2a462ba       kube-scheduler-default-k8s-diff-port-997805            kube-system
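	The container status table is the CRI view of the node and can be regenerated on the host with crictl against CRI-O's socket:

	    # Hedged sketch; the socket path is CRI-O's default, matching the cri-o://1.34.2 runtime reported in the node description below.
	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a     # containers, as in the table above
	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock pods      # the backing sandboxes (POD IDs)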
	
	
	==> coredns [77cd2dfe9a908a07fb9a6dc6e15a960358fc62ab0948843e050b29bdd5055b0b] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:38278 - 59374 "HINFO IN 5918253096244858798.2917750958228613047. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.018796067s
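	This CoreDNS instance serves the Corefile minikube manages; in the newest-cni bring-up earlier in this log the coredns ConfigMap was rewritten to add a host.minikube.internal hosts entry and the log plugin, and minikube performs the same injection per profile. Two quick checks, one against the ConfigMap and one end-to-end, using the busybox image already pulled on this node per the CRI-O log:

	    # Hedged sketch; whether this profile's ConfigMap carries the entry should be confirmed by the first command.
	    kubectl -n kube-system get configmap coredns -o yaml | grep -B1 -A3 'hosts {'
	    kubectl run dnstest --rm -it --restart=Never \
	      --image=gcr.io/k8s-minikube/busybox:1.28.4-glibc -- nslookup host.minikube.internal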
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-997805
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-997805
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c0dec3f81fa1d67ed4ee425e0873d0aa009f9b92
	                    minikube.k8s.io/name=default-k8s-diff-port-997805
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_02T20_54_57_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 02 Dec 2025 20:54:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-997805
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 02 Dec 2025 20:55:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 02 Dec 2025 20:55:12 +0000   Tue, 02 Dec 2025 20:54:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 02 Dec 2025 20:55:12 +0000   Tue, 02 Dec 2025 20:54:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 02 Dec 2025 20:55:12 +0000   Tue, 02 Dec 2025 20:54:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 02 Dec 2025 20:55:12 +0000   Tue, 02 Dec 2025 20:55:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-997805
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 c31a325af81b969158c21fa769271857
	  System UUID:                4d0fe763-c364-4b9d-a9b2-5ea428409eed
	  Boot ID:                    9dd5456f-e394-4b4b-9458-48d26faf507e
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-jrln7                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     25s
	  kube-system                 etcd-default-k8s-diff-port-997805                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         32s
	  kube-system                 kindnet-rzqpn                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      25s
	  kube-system                 kube-apiserver-default-k8s-diff-port-997805             250m (3%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-997805    200m (2%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-proxy-s2jpn                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-scheduler-default-k8s-diff-port-997805             100m (1%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 23s   kube-proxy       
	  Normal  Starting                 30s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  30s   kubelet          Node default-k8s-diff-port-997805 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    30s   kubelet          Node default-k8s-diff-port-997805 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     30s   kubelet          Node default-k8s-diff-port-997805 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           26s   node-controller  Node default-k8s-diff-port-997805 event: Registered Node default-k8s-diff-port-997805 in Controller
	  Normal  NodeReady                14s   kubelet          Node default-k8s-diff-port-997805 status is now: NodeReady
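	The "Allocated resources" block above is just the sum of the per-pod requests and limits in the table: CPU requests 100m + 100m + 100m + 250m + 200m + 100m = 850m, memory requests 70Mi + 100Mi + 50Mi = 220Mi, and the only limits come from kindnet (100m CPU, 50Mi) plus coredns's 170Mi memory limit, giving the 220Mi memory limit shown. The whole section is the output of:

	    kubectl describe node default-k8s-diff-port-997805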
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: a6 a8 6c 2c e4 fb 66 95 0d 9f b9 e6 08 00
	[ +32.254238] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a6 a8 6c 2c e4 fb 66 95 0d 9f b9 e6 08 00
	[Dec 2 20:52] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 62 03 bd 14 45 8a 08 06
	[  +0.000590] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 96 27 ad 0d 40 04 08 06
	[Dec 2 20:53] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 56 d8 4f 6f 7c f7 08 06
	[  +0.000700] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 82 e4 ba c0 78 5f 08 06
	[ +10.119645] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000022] ll header: 00000000: ff ff ff ff ff ff ea 15 d8 88 c9 57 08 06
	[  +2.447166] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 62 df 09 53 d6 6e 08 06
	[  +0.000374] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 62 8d 06 71 0a 5e 08 06
	[Dec 2 20:54] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 12 47 13 50 f6 bc 08 06
	[  +0.001523] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ea 15 d8 88 c9 57 08 06
	[ +22.123549] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 36 0d 45 06 42 2a 08 06
	[  +0.000389] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 56 d8 4f 6f 7c f7 08 06
	
	
	==> etcd [13f327e0971f35bf66ac82b7dd025d8a27a7143320433abd6fae3ad2e4b23758] <==
	{"level":"warn","ts":"2025-12-02T20:54:52.750972Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39926","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:54:52.761575Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39952","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:54:52.771462Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:54:52.780128Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39988","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:54:52.788917Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40012","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:54:52.798294Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:54:52.807405Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:54:52.824259Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:54:52.839003Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:54:52.849970Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40094","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:54:52.858996Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:54:52.868868Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40112","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:54:52.878376Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40130","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:54:52.887572Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:54:52.898319Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40176","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:54:52.907815Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:54:52.917256Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40226","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:54:52.926126Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:54:52.941612Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40248","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:54:52.947484Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:54:52.957326Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40272","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:54:52.967285Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40288","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:54:53.044827Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40302","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:55:06.526521Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"123.153966ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/default-k8s-diff-port-997805\" limit:1 ","response":"range_response_count:1 size:5662"}
	{"level":"info","ts":"2025-12-02T20:55:06.526601Z","caller":"traceutil/trace.go:172","msg":"trace[269122038] range","detail":"{range_begin:/registry/minions/default-k8s-diff-port-997805; range_end:; response_count:1; response_revision:387; }","duration":"123.265831ms","start":"2025-12-02T20:55:06.403319Z","end":"2025-12-02T20:55:06.526585Z","steps":["trace[269122038] 'range keys from in-memory index tree'  (duration: 123.009695ms)"],"step_count":1}
	
	
	==> kernel <==
	 20:55:26 up  2:37,  0 user,  load average: 5.96, 4.16, 2.63
	Linux default-k8s-diff-port-997805 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [29f6bc8e635854d5b27f17f7062c8d36b8aba50230bef73344d255abfd5249d7] <==
	I1202 20:55:02.405469       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1202 20:55:02.405778       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1202 20:55:02.405970       1 main.go:148] setting mtu 1500 for CNI 
	I1202 20:55:02.405990       1 main.go:178] kindnetd IP family: "ipv4"
	I1202 20:55:02.406035       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-02T20:55:02Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1202 20:55:02.704541       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1202 20:55:02.704958       1 controller.go:381] "Waiting for informer caches to sync"
	I1202 20:55:02.705052       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1202 20:55:02.705305       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1202 20:55:03.206051       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1202 20:55:03.206108       1 metrics.go:72] Registering metrics
	I1202 20:55:03.206236       1 controller.go:711] "Syncing nftables rules"
	I1202 20:55:12.705381       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1202 20:55:12.705449       1 main.go:301] handling current node
	I1202 20:55:22.706228       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1202 20:55:22.706285       1 main.go:301] handling current node
	
	
	==> kube-apiserver [2f77aed62b591d2a1e65e9b7395eefae6f4379edd1b63fe36da71be2db7fdf86] <==
	I1202 20:54:53.659613       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1202 20:54:53.659038       1 cache.go:39] Caches are synced for autoregister controller
	I1202 20:54:53.660710       1 controller.go:667] quota admission added evaluator for: namespaces
	I1202 20:54:53.668654       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1202 20:54:53.673934       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1202 20:54:53.681343       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1202 20:54:53.681932       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1202 20:54:53.853900       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1202 20:54:54.563679       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1202 20:54:54.568497       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1202 20:54:54.568518       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1202 20:54:55.283501       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1202 20:54:55.337199       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1202 20:54:55.469537       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1202 20:54:55.483766       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1202 20:54:55.485487       1 controller.go:667] quota admission added evaluator for: endpoints
	I1202 20:54:55.498182       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1202 20:54:55.598738       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1202 20:54:56.268549       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1202 20:54:56.281599       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1202 20:54:56.291338       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1202 20:55:00.605749       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1202 20:55:00.612159       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1202 20:55:01.300797       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1202 20:55:01.701340       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [d10cd0e59c659262d44c1501cc442b11311ecd604338dc021d330d8b790c2771] <==
	I1202 20:55:00.614306       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1202 20:55:00.614678       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-997805" podCIDRs=["10.244.0.0/24"]
	I1202 20:55:00.619703       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1202 20:55:00.622289       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1202 20:55:00.630866       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1202 20:55:00.631547       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1202 20:55:00.645977       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1202 20:55:00.647138       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1202 20:55:00.647143       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1202 20:55:00.647154       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1202 20:55:00.648141       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1202 20:55:00.648197       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1202 20:55:00.648310       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1202 20:55:00.648316       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1202 20:55:00.648431       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1202 20:55:00.648439       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1202 20:55:00.648447       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1202 20:55:00.648318       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1202 20:55:00.648903       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1202 20:55:00.648933       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1202 20:55:00.650167       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1202 20:55:00.651375       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1202 20:55:00.653575       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1202 20:55:00.660595       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1202 20:55:15.606308       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [bdccb296f26d1b7f241624f61ec7476d39bdf54942d015433bdd31ab34a4a184] <==
	I1202 20:55:02.331020       1 server_linux.go:53] "Using iptables proxy"
	I1202 20:55:02.415586       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1202 20:55:02.516480       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1202 20:55:02.516520       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1202 20:55:02.516680       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1202 20:55:02.536761       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1202 20:55:02.536817       1 server_linux.go:132] "Using iptables Proxier"
	I1202 20:55:02.542949       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1202 20:55:02.543460       1 server.go:527] "Version info" version="v1.34.2"
	I1202 20:55:02.543602       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 20:55:02.545816       1 config.go:403] "Starting serviceCIDR config controller"
	I1202 20:55:02.545911       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1202 20:55:02.546005       1 config.go:309] "Starting node config controller"
	I1202 20:55:02.546018       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1202 20:55:02.546025       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1202 20:55:02.546151       1 config.go:106] "Starting endpoint slice config controller"
	I1202 20:55:02.546210       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1202 20:55:02.546699       1 config.go:200] "Starting service config controller"
	I1202 20:55:02.546750       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1202 20:55:02.646148       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1202 20:55:02.647535       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1202 20:55:02.647543       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [61deaac8c9479e93eeb79b8947c1adc3256e642c0da8cfbb2b9d08e12a88ac13] <==
	E1202 20:54:53.620763       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1202 20:54:53.620821       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1202 20:54:53.620834       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1202 20:54:53.620871       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1202 20:54:53.620909       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1202 20:54:53.620925       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1202 20:54:53.620569       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1202 20:54:53.621013       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1202 20:54:54.465228       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1202 20:54:54.489266       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1202 20:54:54.490040       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1202 20:54:54.526197       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1202 20:54:54.643111       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1202 20:54:54.655289       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1202 20:54:54.662604       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1202 20:54:54.674816       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1202 20:54:54.744311       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1202 20:54:54.751775       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1202 20:54:54.887542       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1202 20:54:54.943491       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1202 20:54:54.979821       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1202 20:54:54.979810       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1202 20:54:55.003198       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1202 20:54:55.003198       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	I1202 20:54:56.615273       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 02 20:55:01 default-k8s-diff-port-997805 kubelet[1322]: I1202 20:55:01.357302    1322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/407f6b3c-8d8b-47b0-b994-c061eedc6420-lib-modules\") pod \"kube-proxy-s2jpn\" (UID: \"407f6b3c-8d8b-47b0-b994-c061eedc6420\") " pod="kube-system/kube-proxy-s2jpn"
	Dec 02 20:55:01 default-k8s-diff-port-997805 kubelet[1322]: I1202 20:55:01.357360    1322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/eabc6de0-0707-4a1b-ab5a-4f6e8255bcfb-cni-cfg\") pod \"kindnet-rzqpn\" (UID: \"eabc6de0-0707-4a1b-ab5a-4f6e8255bcfb\") " pod="kube-system/kindnet-rzqpn"
	Dec 02 20:55:01 default-k8s-diff-port-997805 kubelet[1322]: I1202 20:55:01.357390    1322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/407f6b3c-8d8b-47b0-b994-c061eedc6420-xtables-lock\") pod \"kube-proxy-s2jpn\" (UID: \"407f6b3c-8d8b-47b0-b994-c061eedc6420\") " pod="kube-system/kube-proxy-s2jpn"
	Dec 02 20:55:01 default-k8s-diff-port-997805 kubelet[1322]: I1202 20:55:01.357417    1322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eabc6de0-0707-4a1b-ab5a-4f6e8255bcfb-xtables-lock\") pod \"kindnet-rzqpn\" (UID: \"eabc6de0-0707-4a1b-ab5a-4f6e8255bcfb\") " pod="kube-system/kindnet-rzqpn"
	Dec 02 20:55:01 default-k8s-diff-port-997805 kubelet[1322]: I1202 20:55:01.357440    1322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eabc6de0-0707-4a1b-ab5a-4f6e8255bcfb-lib-modules\") pod \"kindnet-rzqpn\" (UID: \"eabc6de0-0707-4a1b-ab5a-4f6e8255bcfb\") " pod="kube-system/kindnet-rzqpn"
	Dec 02 20:55:01 default-k8s-diff-port-997805 kubelet[1322]: I1202 20:55:01.357465    1322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqrw7\" (UniqueName: \"kubernetes.io/projected/eabc6de0-0707-4a1b-ab5a-4f6e8255bcfb-kube-api-access-mqrw7\") pod \"kindnet-rzqpn\" (UID: \"eabc6de0-0707-4a1b-ab5a-4f6e8255bcfb\") " pod="kube-system/kindnet-rzqpn"
	Dec 02 20:55:01 default-k8s-diff-port-997805 kubelet[1322]: I1202 20:55:01.357493    1322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/407f6b3c-8d8b-47b0-b994-c061eedc6420-kube-proxy\") pod \"kube-proxy-s2jpn\" (UID: \"407f6b3c-8d8b-47b0-b994-c061eedc6420\") " pod="kube-system/kube-proxy-s2jpn"
	Dec 02 20:55:01 default-k8s-diff-port-997805 kubelet[1322]: I1202 20:55:01.357514    1322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drsqh\" (UniqueName: \"kubernetes.io/projected/407f6b3c-8d8b-47b0-b994-c061eedc6420-kube-api-access-drsqh\") pod \"kube-proxy-s2jpn\" (UID: \"407f6b3c-8d8b-47b0-b994-c061eedc6420\") " pod="kube-system/kube-proxy-s2jpn"
	Dec 02 20:55:01 default-k8s-diff-port-997805 kubelet[1322]: E1202 20:55:01.465412    1322 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Dec 02 20:55:01 default-k8s-diff-port-997805 kubelet[1322]: E1202 20:55:01.465456    1322 projected.go:196] Error preparing data for projected volume kube-api-access-drsqh for pod kube-system/kube-proxy-s2jpn: configmap "kube-root-ca.crt" not found
	Dec 02 20:55:01 default-k8s-diff-port-997805 kubelet[1322]: E1202 20:55:01.465561    1322 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/407f6b3c-8d8b-47b0-b994-c061eedc6420-kube-api-access-drsqh podName:407f6b3c-8d8b-47b0-b994-c061eedc6420 nodeName:}" failed. No retries permitted until 2025-12-02 20:55:01.96552662 +0000 UTC m=+5.931770645 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-drsqh" (UniqueName: "kubernetes.io/projected/407f6b3c-8d8b-47b0-b994-c061eedc6420-kube-api-access-drsqh") pod "kube-proxy-s2jpn" (UID: "407f6b3c-8d8b-47b0-b994-c061eedc6420") : configmap "kube-root-ca.crt" not found
	Dec 02 20:55:01 default-k8s-diff-port-997805 kubelet[1322]: E1202 20:55:01.466281    1322 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Dec 02 20:55:01 default-k8s-diff-port-997805 kubelet[1322]: E1202 20:55:01.466310    1322 projected.go:196] Error preparing data for projected volume kube-api-access-mqrw7 for pod kube-system/kindnet-rzqpn: configmap "kube-root-ca.crt" not found
	Dec 02 20:55:01 default-k8s-diff-port-997805 kubelet[1322]: E1202 20:55:01.466379    1322 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/eabc6de0-0707-4a1b-ab5a-4f6e8255bcfb-kube-api-access-mqrw7 podName:eabc6de0-0707-4a1b-ab5a-4f6e8255bcfb nodeName:}" failed. No retries permitted until 2025-12-02 20:55:01.966357132 +0000 UTC m=+5.932601155 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-mqrw7" (UniqueName: "kubernetes.io/projected/eabc6de0-0707-4a1b-ab5a-4f6e8255bcfb-kube-api-access-mqrw7") pod "kindnet-rzqpn" (UID: "eabc6de0-0707-4a1b-ab5a-4f6e8255bcfb") : configmap "kube-root-ca.crt" not found
	Dec 02 20:55:03 default-k8s-diff-port-997805 kubelet[1322]: I1202 20:55:03.289653    1322 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-s2jpn" podStartSLOduration=2.2896289420000002 podStartE2EDuration="2.289628942s" podCreationTimestamp="2025-12-02 20:55:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:55:03.289585191 +0000 UTC m=+7.255829216" watchObservedRunningTime="2025-12-02 20:55:03.289628942 +0000 UTC m=+7.255872968"
	Dec 02 20:55:03 default-k8s-diff-port-997805 kubelet[1322]: I1202 20:55:03.289789    1322 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-rzqpn" podStartSLOduration=2.289781406 podStartE2EDuration="2.289781406s" podCreationTimestamp="2025-12-02 20:55:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:55:03.202849398 +0000 UTC m=+7.169093424" watchObservedRunningTime="2025-12-02 20:55:03.289781406 +0000 UTC m=+7.256025432"
	Dec 02 20:55:12 default-k8s-diff-port-997805 kubelet[1322]: I1202 20:55:12.934899    1322 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Dec 02 20:55:13 default-k8s-diff-port-997805 kubelet[1322]: I1202 20:55:13.040884    1322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-926t4\" (UniqueName: \"kubernetes.io/projected/37de7399-6357-4f08-9240-fc9e0d884f47-kube-api-access-926t4\") pod \"coredns-66bc5c9577-jrln7\" (UID: \"37de7399-6357-4f08-9240-fc9e0d884f47\") " pod="kube-system/coredns-66bc5c9577-jrln7"
	Dec 02 20:55:13 default-k8s-diff-port-997805 kubelet[1322]: I1202 20:55:13.040935    1322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/37de7399-6357-4f08-9240-fc9e0d884f47-config-volume\") pod \"coredns-66bc5c9577-jrln7\" (UID: \"37de7399-6357-4f08-9240-fc9e0d884f47\") " pod="kube-system/coredns-66bc5c9577-jrln7"
	Dec 02 20:55:13 default-k8s-diff-port-997805 kubelet[1322]: I1202 20:55:13.040979    1322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/08893b97-1192-4f4e-8636-8f2ba82c853d-tmp\") pod \"storage-provisioner\" (UID: \"08893b97-1192-4f4e-8636-8f2ba82c853d\") " pod="kube-system/storage-provisioner"
	Dec 02 20:55:13 default-k8s-diff-port-997805 kubelet[1322]: I1202 20:55:13.041002    1322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dh2s\" (UniqueName: \"kubernetes.io/projected/08893b97-1192-4f4e-8636-8f2ba82c853d-kube-api-access-4dh2s\") pod \"storage-provisioner\" (UID: \"08893b97-1192-4f4e-8636-8f2ba82c853d\") " pod="kube-system/storage-provisioner"
	Dec 02 20:55:14 default-k8s-diff-port-997805 kubelet[1322]: I1202 20:55:14.245960    1322 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=12.245937135 podStartE2EDuration="12.245937135s" podCreationTimestamp="2025-12-02 20:55:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:55:14.245577814 +0000 UTC m=+18.211821842" watchObservedRunningTime="2025-12-02 20:55:14.245937135 +0000 UTC m=+18.212181162"
	Dec 02 20:55:14 default-k8s-diff-port-997805 kubelet[1322]: I1202 20:55:14.246085    1322 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-jrln7" podStartSLOduration=13.246063586 podStartE2EDuration="13.246063586s" podCreationTimestamp="2025-12-02 20:55:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:55:14.230586702 +0000 UTC m=+18.196830727" watchObservedRunningTime="2025-12-02 20:55:14.246063586 +0000 UTC m=+18.212307612"
	Dec 02 20:55:16 default-k8s-diff-port-997805 kubelet[1322]: I1202 20:55:16.365263    1322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfxl8\" (UniqueName: \"kubernetes.io/projected/b5b6709a-d731-4be3-a6d0-ecbcb3655de4-kube-api-access-hfxl8\") pod \"busybox\" (UID: \"b5b6709a-d731-4be3-a6d0-ecbcb3655de4\") " pod="default/busybox"
	Dec 02 20:55:19 default-k8s-diff-port-997805 kubelet[1322]: I1202 20:55:19.244480    1322 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.182759235 podStartE2EDuration="3.244456185s" podCreationTimestamp="2025-12-02 20:55:16 +0000 UTC" firstStartedPulling="2025-12-02 20:55:16.623124502 +0000 UTC m=+20.589368520" lastFinishedPulling="2025-12-02 20:55:18.684821445 +0000 UTC m=+22.651065470" observedRunningTime="2025-12-02 20:55:19.244024878 +0000 UTC m=+23.210268901" watchObservedRunningTime="2025-12-02 20:55:19.244456185 +0000 UTC m=+23.210700211"
	
	
	==> storage-provisioner [f54a92cbe04692ba2160d73e30da11f079af3e2328bfb31ab9507fa3b6122483] <==
	I1202 20:55:13.354568       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1202 20:55:13.365664       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1202 20:55:13.365904       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1202 20:55:13.370030       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:55:13.377239       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1202 20:55:13.377605       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1202 20:55:13.377841       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-997805_c51c2e03-3145-45aa-98c7-394eb48045be!
	I1202 20:55:13.377747       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"630e2ab7-c763-4f65-86eb-788c49314bcc", APIVersion:"v1", ResourceVersion:"409", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-997805_c51c2e03-3145-45aa-98c7-394eb48045be became leader
	W1202 20:55:13.385816       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:55:13.390607       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1202 20:55:13.479148       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-997805_c51c2e03-3145-45aa-98c7-394eb48045be!
	W1202 20:55:15.396495       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:55:15.401521       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:55:17.405802       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:55:17.412606       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:55:19.415686       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:55:19.421004       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:55:21.424809       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:55:21.429368       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:55:23.434165       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:55:23.439468       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:55:25.444061       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:55:25.448934       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-997805 -n default-k8s-diff-port-997805
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-997805 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.53s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (6.38s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-245604 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p newest-cni-245604 --alsologtostderr -v=1: exit status 80 (1.710798538s)

                                                
                                                
-- stdout --
	* Pausing node newest-cni-245604 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 20:55:40.309777  758179 out.go:360] Setting OutFile to fd 1 ...
	I1202 20:55:40.310205  758179 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:55:40.310220  758179 out.go:374] Setting ErrFile to fd 2...
	I1202 20:55:40.310226  758179 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:55:40.310588  758179 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-407427/.minikube/bin
	I1202 20:55:40.310970  758179 out.go:368] Setting JSON to false
	I1202 20:55:40.310999  758179 mustload.go:66] Loading cluster: newest-cni-245604
	I1202 20:55:40.311582  758179 config.go:182] Loaded profile config "newest-cni-245604": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 20:55:40.312178  758179 cli_runner.go:164] Run: docker container inspect newest-cni-245604 --format={{.State.Status}}
	I1202 20:55:40.338924  758179 host.go:66] Checking if "newest-cni-245604" exists ...
	I1202 20:55:40.339827  758179 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 20:55:40.433022  758179 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:78 OomKillDisable:false NGoroutines:88 SystemTime:2025-12-02 20:55:40.416394862 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 20:55:40.434389  758179 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-245604 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1202 20:55:40.435962  758179 out.go:179] * Pausing node newest-cni-245604 ... 
	I1202 20:55:40.437146  758179 host.go:66] Checking if "newest-cni-245604" exists ...
	I1202 20:55:40.437428  758179 ssh_runner.go:195] Run: systemctl --version
	I1202 20:55:40.437465  758179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-245604
	I1202 20:55:40.462348  758179 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33498 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/newest-cni-245604/id_rsa Username:docker}
	I1202 20:55:40.568011  758179 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 20:55:40.584448  758179 pause.go:52] kubelet running: true
	I1202 20:55:40.584543  758179 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1202 20:55:40.765712  758179 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1202 20:55:40.765827  758179 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1202 20:55:40.849838  758179 cri.go:89] found id: "8af9f47509e18f95362a22d5dbd4df7f0e64f85d8cf23218eed49b6a2fcf50c8"
	I1202 20:55:40.849866  758179 cri.go:89] found id: "5d356b416e3653870f09b95ab59dd41dd02fd4db8c0ee65696f185f05b58a6f0"
	I1202 20:55:40.849872  758179 cri.go:89] found id: "299a73dcc241327fe5cf3f205be0f0fa45b6267d9d291d2b15d27c02c06717cf"
	I1202 20:55:40.849878  758179 cri.go:89] found id: "7f956e3ba93eb4957689173471e1faef57d87fd2d2ec24476026588c56c69ba2"
	I1202 20:55:40.849882  758179 cri.go:89] found id: "d1dc95faf60a35cdf8dd5e3d023890a6a83f6e6ef58c93949a275bea726c4560"
	I1202 20:55:40.849887  758179 cri.go:89] found id: "c4e1eb06953444823120ccc3fc5298bbaa5c977cbbf41e594e6b162545a4994c"
	I1202 20:55:40.849892  758179 cri.go:89] found id: ""
	I1202 20:55:40.849942  758179 ssh_runner.go:195] Run: sudo runc list -f json
	I1202 20:55:40.862414  758179 retry.go:31] will retry after 153.140735ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T20:55:40Z" level=error msg="open /run/runc: no such file or directory"
	I1202 20:55:41.015792  758179 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 20:55:41.029376  758179 pause.go:52] kubelet running: false
	I1202 20:55:41.029451  758179 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1202 20:55:41.162636  758179 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1202 20:55:41.162738  758179 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1202 20:55:41.233656  758179 cri.go:89] found id: "8af9f47509e18f95362a22d5dbd4df7f0e64f85d8cf23218eed49b6a2fcf50c8"
	I1202 20:55:41.233686  758179 cri.go:89] found id: "5d356b416e3653870f09b95ab59dd41dd02fd4db8c0ee65696f185f05b58a6f0"
	I1202 20:55:41.233693  758179 cri.go:89] found id: "299a73dcc241327fe5cf3f205be0f0fa45b6267d9d291d2b15d27c02c06717cf"
	I1202 20:55:41.233698  758179 cri.go:89] found id: "7f956e3ba93eb4957689173471e1faef57d87fd2d2ec24476026588c56c69ba2"
	I1202 20:55:41.233703  758179 cri.go:89] found id: "d1dc95faf60a35cdf8dd5e3d023890a6a83f6e6ef58c93949a275bea726c4560"
	I1202 20:55:41.233708  758179 cri.go:89] found id: "c4e1eb06953444823120ccc3fc5298bbaa5c977cbbf41e594e6b162545a4994c"
	I1202 20:55:41.233713  758179 cri.go:89] found id: ""
	I1202 20:55:41.233763  758179 ssh_runner.go:195] Run: sudo runc list -f json
	I1202 20:55:41.246817  758179 retry.go:31] will retry after 456.609778ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T20:55:41Z" level=error msg="open /run/runc: no such file or directory"
	I1202 20:55:41.704317  758179 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 20:55:41.718847  758179 pause.go:52] kubelet running: false
	I1202 20:55:41.718925  758179 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1202 20:55:41.838480  758179 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1202 20:55:41.838564  758179 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1202 20:55:41.918407  758179 cri.go:89] found id: "8af9f47509e18f95362a22d5dbd4df7f0e64f85d8cf23218eed49b6a2fcf50c8"
	I1202 20:55:41.918434  758179 cri.go:89] found id: "5d356b416e3653870f09b95ab59dd41dd02fd4db8c0ee65696f185f05b58a6f0"
	I1202 20:55:41.918441  758179 cri.go:89] found id: "299a73dcc241327fe5cf3f205be0f0fa45b6267d9d291d2b15d27c02c06717cf"
	I1202 20:55:41.918445  758179 cri.go:89] found id: "7f956e3ba93eb4957689173471e1faef57d87fd2d2ec24476026588c56c69ba2"
	I1202 20:55:41.918449  758179 cri.go:89] found id: "d1dc95faf60a35cdf8dd5e3d023890a6a83f6e6ef58c93949a275bea726c4560"
	I1202 20:55:41.918454  758179 cri.go:89] found id: "c4e1eb06953444823120ccc3fc5298bbaa5c977cbbf41e594e6b162545a4994c"
	I1202 20:55:41.918458  758179 cri.go:89] found id: ""
	I1202 20:55:41.918523  758179 ssh_runner.go:195] Run: sudo runc list -f json
	I1202 20:55:41.932698  758179 out.go:203] 
	W1202 20:55:41.933946  758179 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T20:55:41Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T20:55:41Z" level=error msg="open /run/runc: no such file or directory"
	
	W1202 20:55:41.933972  758179 out.go:285] * 
	* 
	W1202 20:55:41.938869  758179 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 20:55:41.940347  758179 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p newest-cni-245604 --alsologtostderr -v=1 failed: exit status 80
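The exit status 80 above reduces to one probe: before pausing, minikube runs `sudo runc list -f json` on the node to enumerate running containers, and runc exits non-zero because its default state directory /run/runc does not exist on this CRI-O node. The sketch below reproduces that probe locally, under the assumption that a missing state directory can be treated as "no containers to pause"; the runcRoot constant and that fallback are illustrative assumptions, not minikube's actual pause code.

// probe_runc.go - minimal sketch of the "list running runc containers" probe
// that fails in the pause log above. Runs locally (no SSH) and tolerates a
// missing state directory; both are assumptions for illustration.
package main

import (
	"encoding/json"
	"errors"
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// Default runc state directory; on this CRI-O node it was never created,
// which is exactly the "open /run/runc: no such file or directory" error above.
const runcRoot = "/run/runc"

type runcContainer struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

func listRunc() ([]runcContainer, error) {
	// Assumed workaround: if the state directory is absent, report an empty
	// list instead of failing the whole pause operation.
	if _, err := os.Stat(runcRoot); errors.Is(err, os.ErrNotExist) {
		return nil, nil
	}
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
	if err != nil {
		return nil, fmt.Errorf("runc list: %w", err)
	}
	var cs []runcContainer
	if s := strings.TrimSpace(string(out)); s != "" && s != "null" {
		if err := json.Unmarshal([]byte(s), &cs); err != nil {
			return nil, fmt.Errorf("parse runc list output: %w", err)
		}
	}
	return cs, nil
}

func main() {
	cs, err := listRunc()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("found %d runc container(s)\n", len(cs))
}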
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
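The proxy snapshot above is just a read of the host environment at post-mortem time. A minimal sketch of how that HTTP_PROXY/HTTPS_PROXY/NO_PROXY line could be rendered is below; the helper name and the "<empty>" placeholder formatting are assumptions based on the output shown, not the test helper's actual code.

// proxy_snapshot.go - sketch of rendering the PROXY env snapshot line above.
package main

import (
	"fmt"
	"os"
)

func snapshotProxyEnv() string {
	s := "PROXY env:"
	for _, k := range []string{"HTTP_PROXY", "HTTPS_PROXY", "NO_PROXY"} {
		v := os.Getenv(k)
		if v == "" {
			v = "<empty>" // unset variables are reported as <empty> in the log
		}
		s += fmt.Sprintf(" %s=%q", k, v)
	}
	return s
}

func main() {
	fmt.Println(snapshotProxyEnv())
}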
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-245604
helpers_test.go:243: (dbg) docker inspect newest-cni-245604:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ae60842ee29fe46d926c1b83a674aa62b502c7f9b5da358f869a46142c7f9e7c",
	        "Created": "2025-12-02T20:54:52.492393664Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 754376,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-02T20:55:29.610346427Z",
	            "FinishedAt": "2025-12-02T20:55:28.3563002Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/ae60842ee29fe46d926c1b83a674aa62b502c7f9b5da358f869a46142c7f9e7c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ae60842ee29fe46d926c1b83a674aa62b502c7f9b5da358f869a46142c7f9e7c/hostname",
	        "HostsPath": "/var/lib/docker/containers/ae60842ee29fe46d926c1b83a674aa62b502c7f9b5da358f869a46142c7f9e7c/hosts",
	        "LogPath": "/var/lib/docker/containers/ae60842ee29fe46d926c1b83a674aa62b502c7f9b5da358f869a46142c7f9e7c/ae60842ee29fe46d926c1b83a674aa62b502c7f9b5da358f869a46142c7f9e7c-json.log",
	        "Name": "/newest-cni-245604",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-245604:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-245604",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ae60842ee29fe46d926c1b83a674aa62b502c7f9b5da358f869a46142c7f9e7c",
	                "LowerDir": "/var/lib/docker/overlay2/cadb92bade23480fadfbab75eef8dd705d24c3d8c95f9fa3a23707e903f6c6b9-init/diff:/var/lib/docker/overlay2/49bb44e987885c48600d5ae7ad3c81cad82a00f38070c0460882c5746d9fae59/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cadb92bade23480fadfbab75eef8dd705d24c3d8c95f9fa3a23707e903f6c6b9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cadb92bade23480fadfbab75eef8dd705d24c3d8c95f9fa3a23707e903f6c6b9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cadb92bade23480fadfbab75eef8dd705d24c3d8c95f9fa3a23707e903f6c6b9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-245604",
	                "Source": "/var/lib/docker/volumes/newest-cni-245604/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-245604",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-245604",
	                "name.minikube.sigs.k8s.io": "newest-cni-245604",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "435c72dd59758f237188c11cbde9f7313579b3d64d10737dd108ede2a9c1a214",
	            "SandboxKey": "/var/run/docker/netns/435c72dd5975",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33498"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33499"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33502"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33500"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33501"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-245604": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "417e9d972863c61faff7f9557de77252152a1c936456e0c9e3a58022e688fea1",
	                    "EndpointID": "0df195722373fe48708fc4895b89159a2f90618eef2ac2cb4f9a87be5c54f5b8",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "72:f5:78:e1:d6:15",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-245604",
	                        "ae60842ee29f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-245604 -n newest-cni-245604
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-245604 -n newest-cni-245604: exit status 2 (342.475525ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
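The "Running" host status despite the failed pause matches the docker inspect output above (State.Status is "running", Paused is false). Later in the provisioning log, minikube reads these fields with `docker container inspect -f` Go templates; the sketch below shows the same two lookups (container state and the 22/tcp host port) via os/exec, with error handling simplified for illustration.

// inspect_probe.go - sketch of reading container state and the SSH host port
// with the same --format templates that appear in the provisioning log below.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func inspectFormat(name, format string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", "-f", format, name).Output()
	if err != nil {
		return "", fmt.Errorf("docker inspect %s: %w", name, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	name := "newest-cni-245604"
	state, err := inspectFormat(name, "{{.State.Status}}")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	sshPort, _ := inspectFormat(name, `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`)
	fmt.Printf("state=%s ssh-port=%s\n", state, sshPort)
}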
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-245604 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p newest-cni-245604 logs -n 25: (1.104361667s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬───────
──────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼───────
──────────────┤
	│ ssh     │ -p bridge-775392 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                            │ bridge-775392                │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │                     │
	│ ssh     │ -p bridge-775392 sudo systemctl cat containerd --no-pager                                                                                                                                                                                            │ bridge-775392                │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │ 02 Dec 25 20:54 UTC │
	│ ssh     │ -p bridge-775392 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                                     │ bridge-775392                │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │ 02 Dec 25 20:54 UTC │
	│ ssh     │ -p bridge-775392 sudo cat /etc/containerd/config.toml                                                                                                                                                                                                │ bridge-775392                │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │ 02 Dec 25 20:54 UTC │
	│ ssh     │ -p bridge-775392 sudo containerd config dump                                                                                                                                                                                                         │ bridge-775392                │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │ 02 Dec 25 20:54 UTC │
	│ ssh     │ -p bridge-775392 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                                  │ bridge-775392                │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │ 02 Dec 25 20:54 UTC │
	│ ssh     │ -p bridge-775392 sudo systemctl cat crio --no-pager                                                                                                                                                                                                  │ bridge-775392                │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │ 02 Dec 25 20:54 UTC │
	│ ssh     │ -p bridge-775392 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                        │ bridge-775392                │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │ 02 Dec 25 20:54 UTC │
	│ ssh     │ -p bridge-775392 sudo crio config                                                                                                                                                                                                                    │ bridge-775392                │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │ 02 Dec 25 20:54 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-992336 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ old-k8s-version-992336       │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │ 02 Dec 25 20:54 UTC │
	│ delete  │ -p bridge-775392                                                                                                                                                                                                                                     │ bridge-775392                │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │ 02 Dec 25 20:54 UTC │
	│ start   │ -p old-k8s-version-992336 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0        │ old-k8s-version-992336       │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │ 02 Dec 25 20:55 UTC │
	│ start   │ -p newest-cni-245604 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-245604            │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │ 02 Dec 25 20:55 UTC │
	│ addons  │ enable metrics-server -p no-preload-336331 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ no-preload-336331            │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │                     │
	│ stop    │ -p no-preload-336331 --alsologtostderr -v=3                                                                                                                                                                                                          │ no-preload-336331            │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:55 UTC │
	│ addons  │ enable metrics-server -p newest-cni-245604 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-245604            │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-997805 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-997805 │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │                     │
	│ stop    │ -p newest-cni-245604 --alsologtostderr -v=3                                                                                                                                                                                                          │ newest-cni-245604            │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:55 UTC │
	│ stop    │ -p default-k8s-diff-port-997805 --alsologtostderr -v=3                                                                                                                                                                                               │ default-k8s-diff-port-997805 │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │                     │
	│ addons  │ enable dashboard -p newest-cni-245604 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ newest-cni-245604            │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:55 UTC │
	│ start   │ -p newest-cni-245604 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-245604            │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:55 UTC │
	│ addons  │ enable dashboard -p no-preload-336331 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ no-preload-336331            │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:55 UTC │
	│ start   │ -p no-preload-336331 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-336331            │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │                     │
	│ image   │ newest-cni-245604 image list --format=json                                                                                                                                                                                                           │ newest-cni-245604            │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:55 UTC │
	│ pause   │ -p newest-cni-245604 --alsologtostderr -v=1                                                                                                                                                                                                          │ newest-cni-245604            │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴───────
──────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 20:55:30
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 20:55:30.817783  754876 out.go:360] Setting OutFile to fd 1 ...
	I1202 20:55:30.817917  754876 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:55:30.817925  754876 out.go:374] Setting ErrFile to fd 2...
	I1202 20:55:30.817930  754876 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:55:30.818175  754876 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-407427/.minikube/bin
	I1202 20:55:30.818605  754876 out.go:368] Setting JSON to false
	I1202 20:55:30.819738  754876 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":9475,"bootTime":1764699456,"procs":368,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 20:55:30.819805  754876 start.go:143] virtualization: kvm guest
	I1202 20:55:30.821703  754876 out.go:179] * [no-preload-336331] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1202 20:55:30.823088  754876 out.go:179]   - MINIKUBE_LOCATION=21997
	I1202 20:55:30.823146  754876 notify.go:221] Checking for updates...
	I1202 20:55:30.825567  754876 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 20:55:30.827040  754876 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-407427/kubeconfig
	I1202 20:55:30.828679  754876 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-407427/.minikube
	I1202 20:55:30.830055  754876 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1202 20:55:30.831383  754876 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 20:55:30.833014  754876 config.go:182] Loaded profile config "no-preload-336331": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 20:55:30.833598  754876 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 20:55:30.858170  754876 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1202 20:55:30.858305  754876 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 20:55:30.919062  754876 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:71 OomKillDisable:false NGoroutines:84 SystemTime:2025-12-02 20:55:30.90888782 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 20:55:30.919262  754876 docker.go:319] overlay module found
	I1202 20:55:30.922275  754876 out.go:179] * Using the docker driver based on existing profile
	I1202 20:55:30.923498  754876 start.go:309] selected driver: docker
	I1202 20:55:30.923518  754876 start.go:927] validating driver "docker" against &{Name:no-preload-336331 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-336331 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort
:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 20:55:30.923627  754876 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 20:55:30.924313  754876 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 20:55:30.985310  754876 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:71 OomKillDisable:false NGoroutines:84 SystemTime:2025-12-02 20:55:30.975402715 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 20:55:30.985598  754876 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 20:55:30.985632  754876 cni.go:84] Creating CNI manager for ""
	I1202 20:55:30.985697  754876 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 20:55:30.985738  754876 start.go:353] cluster config:
	{Name:no-preload-336331 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-336331 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Disable
Metrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 20:55:30.988414  754876 out.go:179] * Starting "no-preload-336331" primary control-plane node in "no-preload-336331" cluster
	I1202 20:55:30.989685  754876 cache.go:134] Beginning downloading kic base image for docker with crio
	I1202 20:55:30.991130  754876 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1202 20:55:30.992530  754876 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1202 20:55:30.992625  754876 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 20:55:30.992672  754876 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/no-preload-336331/config.json ...
	I1202 20:55:30.992859  754876 cache.go:107] acquiring lock: {Name:mk911a7415c1db6121866a16aaa8d547d8fc27e6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 20:55:30.992955  754876 cache.go:107] acquiring lock: {Name:mk8c99492104b5abf1d260aa0432b08c059c9259 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 20:55:30.992973  754876 cache.go:107] acquiring lock: {Name:mkda13332b8e3f844bd42c29502a9c7671b1ad3b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 20:55:30.992942  754876 cache.go:107] acquiring lock: {Name:mk01b60fbf34196e8795139c06a53061b5bbef1c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 20:55:30.992999  754876 cache.go:107] acquiring lock: {Name:mk4453b54b86b3689d0543734fa82feede2f4f33 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 20:55:30.992995  754876 cache.go:115] /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1202 20:55:30.993061  754876 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 219.403µs
	I1202 20:55:30.993050  754876 cache.go:107] acquiring lock: {Name:mk5eb5d2ea906db41607942a8f8093a266b381cd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 20:55:30.993016  754876 cache.go:107] acquiring lock: {Name:mkf03491d08646dc0a2273e6c20a49756d4e1761 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 20:55:30.993132  754876 cache.go:115] /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1202 20:55:30.993079  754876 cache.go:107] acquiring lock: {Name:mk1ce3ec6c8a0a78faf5ccb0bb487dc5a506ffff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 20:55:30.993151  754876 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 285.721µs
	I1202 20:55:30.993161  754876 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1202 20:55:30.993198  754876 cache.go:115] /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1202 20:55:30.993213  754876 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 201.134µs
	I1202 20:55:30.993226  754876 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1202 20:55:30.993111  754876 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1202 20:55:30.993163  754876 cache.go:115] /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1202 20:55:30.993171  754876 cache.go:115] /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 exists
	I1202 20:55:30.993251  754876 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 394.074µs
	I1202 20:55:30.993266  754876 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1202 20:55:30.993256  754876 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0" took 328.109µs
	I1202 20:55:30.993182  754876 cache.go:115] /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1202 20:55:30.993276  754876 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1202 20:55:30.993233  754876 cache.go:115] /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1202 20:55:30.993292  754876 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 243.66µs
	I1202 20:55:30.993273  754876 cache.go:115] /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1202 20:55:30.993305  754876 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1202 20:55:30.993316  754876 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 382.409µs
	I1202 20:55:30.993343  754876 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1202 20:55:30.993287  754876 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1" took 318.741µs
	I1202 20:55:30.993351  754876 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21997-407427/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1202 20:55:30.993357  754876 cache.go:87] Successfully saved all images to host disk.
	I1202 20:55:31.014607  754876 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1202 20:55:31.014628  754876 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1202 20:55:31.014646  754876 cache.go:243] Successfully downloaded all kic artifacts
	I1202 20:55:31.014679  754876 start.go:360] acquireMachinesLock for no-preload-336331: {Name:mk8bc7d2c702916aad4c913aa227a3dc418a34af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 20:55:31.014744  754876 start.go:364] duration metric: took 42.32µs to acquireMachinesLock for "no-preload-336331"
	I1202 20:55:31.014765  754876 start.go:96] Skipping create...Using existing machine configuration
	I1202 20:55:31.014775  754876 fix.go:54] fixHost starting: 
	I1202 20:55:31.015050  754876 cli_runner.go:164] Run: docker container inspect no-preload-336331 --format={{.State.Status}}
	I1202 20:55:31.032968  754876 fix.go:112] recreateIfNeeded on no-preload-336331: state=Stopped err=<nil>
	W1202 20:55:31.033000  754876 fix.go:138] unexpected machine state, will restart: <nil>
	W1202 20:55:29.545211  743547 pod_ready.go:104] pod "coredns-5dd5756b68-ptzsf" is not "Ready", error: <nil>
	W1202 20:55:31.546136  743547 pod_ready.go:104] pod "coredns-5dd5756b68-ptzsf" is not "Ready", error: <nil>
	W1202 20:55:33.546223  743547 pod_ready.go:104] pod "coredns-5dd5756b68-ptzsf" is not "Ready", error: <nil>
	I1202 20:55:29.583808  754167 out.go:252] * Restarting existing docker container for "newest-cni-245604" ...
	I1202 20:55:29.583878  754167 cli_runner.go:164] Run: docker start newest-cni-245604
	I1202 20:55:29.845255  754167 cli_runner.go:164] Run: docker container inspect newest-cni-245604 --format={{.State.Status}}
	I1202 20:55:29.865371  754167 kic.go:430] container "newest-cni-245604" state is running.
	I1202 20:55:29.865801  754167 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-245604
	I1202 20:55:29.885940  754167 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/newest-cni-245604/config.json ...
	I1202 20:55:29.886183  754167 machine.go:94] provisionDockerMachine start ...
	I1202 20:55:29.886280  754167 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-245604
	I1202 20:55:29.908907  754167 main.go:143] libmachine: Using SSH client type: native
	I1202 20:55:29.909238  754167 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33498 <nil> <nil>}
	I1202 20:55:29.909256  754167 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 20:55:29.909976  754167 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59006->127.0.0.1:33498: read: connection reset by peer
	I1202 20:55:33.054196  754167 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-245604
	
	I1202 20:55:33.054235  754167 ubuntu.go:182] provisioning hostname "newest-cni-245604"
	I1202 20:55:33.054312  754167 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-245604
	I1202 20:55:33.072782  754167 main.go:143] libmachine: Using SSH client type: native
	I1202 20:55:33.073046  754167 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33498 <nil> <nil>}
	I1202 20:55:33.073062  754167 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-245604 && echo "newest-cni-245604" | sudo tee /etc/hostname
	I1202 20:55:33.225236  754167 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-245604
	
	I1202 20:55:33.225343  754167 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-245604
	I1202 20:55:33.245018  754167 main.go:143] libmachine: Using SSH client type: native
	I1202 20:55:33.245355  754167 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33498 <nil> <nil>}
	I1202 20:55:33.245384  754167 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-245604' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-245604/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-245604' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 20:55:33.387898  754167 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1202 20:55:33.387925  754167 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-407427/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-407427/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-407427/.minikube}
	I1202 20:55:33.387964  754167 ubuntu.go:190] setting up certificates
	I1202 20:55:33.387982  754167 provision.go:84] configureAuth start
	I1202 20:55:33.388077  754167 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-245604
	I1202 20:55:33.406744  754167 provision.go:143] copyHostCerts
	I1202 20:55:33.406813  754167 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-407427/.minikube/ca.pem, removing ...
	I1202 20:55:33.406826  754167 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-407427/.minikube/ca.pem
	I1202 20:55:33.406911  754167 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-407427/.minikube/ca.pem (1082 bytes)
	I1202 20:55:33.407033  754167 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-407427/.minikube/cert.pem, removing ...
	I1202 20:55:33.407046  754167 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-407427/.minikube/cert.pem
	I1202 20:55:33.407100  754167 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-407427/.minikube/cert.pem (1123 bytes)
	I1202 20:55:33.407169  754167 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-407427/.minikube/key.pem, removing ...
	I1202 20:55:33.407178  754167 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-407427/.minikube/key.pem
	I1202 20:55:33.407202  754167 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-407427/.minikube/key.pem (1675 bytes)
	I1202 20:55:33.407250  754167 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-407427/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca-key.pem org=jenkins.newest-cni-245604 san=[127.0.0.1 192.168.103.2 localhost minikube newest-cni-245604]
	I1202 20:55:33.499290  754167 provision.go:177] copyRemoteCerts
	I1202 20:55:33.499351  754167 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 20:55:33.499387  754167 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-245604
	I1202 20:55:33.518850  754167 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33498 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/newest-cni-245604/id_rsa Username:docker}
	I1202 20:55:33.621457  754167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1202 20:55:33.640855  754167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1671 bytes)
	I1202 20:55:33.660434  754167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 20:55:33.679568  754167 provision.go:87] duration metric: took 291.566095ms to configureAuth
	I1202 20:55:33.679599  754167 ubuntu.go:206] setting minikube options for container-runtime
	I1202 20:55:33.679815  754167 config.go:182] Loaded profile config "newest-cni-245604": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 20:55:33.679945  754167 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-245604
	I1202 20:55:33.698876  754167 main.go:143] libmachine: Using SSH client type: native
	I1202 20:55:33.699153  754167 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33498 <nil> <nil>}
	I1202 20:55:33.699176  754167 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 20:55:34.005396  754167 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 20:55:34.005428  754167 machine.go:97] duration metric: took 4.119227362s to provisionDockerMachine
	I1202 20:55:34.005444  754167 start.go:293] postStartSetup for "newest-cni-245604" (driver="docker")
	I1202 20:55:34.005461  754167 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 20:55:34.005538  754167 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 20:55:34.005583  754167 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-245604
	I1202 20:55:34.023961  754167 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33498 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/newest-cni-245604/id_rsa Username:docker}
	I1202 20:55:34.126123  754167 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 20:55:34.129976  754167 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 20:55:34.130006  754167 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 20:55:34.130021  754167 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-407427/.minikube/addons for local assets ...
	I1202 20:55:34.130119  754167 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-407427/.minikube/files for local assets ...
	I1202 20:55:34.130236  754167 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-407427/.minikube/files/etc/ssl/certs/4110322.pem -> 4110322.pem in /etc/ssl/certs
	I1202 20:55:34.130375  754167 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 20:55:34.138590  754167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/files/etc/ssl/certs/4110322.pem --> /etc/ssl/certs/4110322.pem (1708 bytes)
	I1202 20:55:34.157867  754167 start.go:296] duration metric: took 152.406161ms for postStartSetup
	I1202 20:55:34.157947  754167 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 20:55:34.158020  754167 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-245604
	I1202 20:55:34.176629  754167 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33498 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/newest-cni-245604/id_rsa Username:docker}
	I1202 20:55:34.273761  754167 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 20:55:34.278786  754167 fix.go:56] duration metric: took 4.715776468s for fixHost
	I1202 20:55:34.278822  754167 start.go:83] releasing machines lock for "newest-cni-245604", held for 4.715869116s
	I1202 20:55:34.278885  754167 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-245604
	I1202 20:55:34.297512  754167 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 20:55:34.297593  754167 ssh_runner.go:195] Run: cat /version.json
	I1202 20:55:34.297609  754167 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-245604
	I1202 20:55:34.297635  754167 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-245604
	I1202 20:55:34.317484  754167 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33498 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/newest-cni-245604/id_rsa Username:docker}
	I1202 20:55:34.317642  754167 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33498 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/newest-cni-245604/id_rsa Username:docker}
	I1202 20:55:34.471737  754167 ssh_runner.go:195] Run: systemctl --version
	I1202 20:55:34.479026  754167 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 20:55:34.516898  754167 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 20:55:34.522423  754167 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 20:55:34.522506  754167 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 20:55:34.531240  754167 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1202 20:55:34.531281  754167 start.go:496] detecting cgroup driver to use...
	I1202 20:55:34.531311  754167 detect.go:190] detected "systemd" cgroup driver on host os
	I1202 20:55:34.531353  754167 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 20:55:34.547158  754167 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 20:55:34.561028  754167 docker.go:218] disabling cri-docker service (if available) ...
	I1202 20:55:34.561113  754167 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 20:55:34.578040  754167 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 20:55:34.591736  754167 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 20:55:34.677205  754167 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 20:55:34.760920  754167 docker.go:234] disabling docker service ...
	I1202 20:55:34.760986  754167 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 20:55:34.776488  754167 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 20:55:34.789702  754167 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 20:55:34.878420  754167 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 20:55:34.966185  754167 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 20:55:34.979419  754167 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 20:55:34.997130  754167 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1202 20:55:34.997197  754167 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:55:35.007334  754167 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1202 20:55:35.007415  754167 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:55:35.017398  754167 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:55:35.026845  754167 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:55:35.036745  754167 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 20:55:35.046293  754167 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:55:35.056408  754167 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:55:35.065760  754167 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:55:35.075350  754167 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 20:55:35.083931  754167 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 20:55:35.092954  754167 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 20:55:35.184874  754167 ssh_runner.go:195] Run: sudo systemctl restart crio
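
The run of sed commands above (20:55:34.997 through 20:55:35.092) is where minikube rewrites /etc/crio/crio.conf.d/02-crio.conf for this profile: pin the pause image, switch CRI-O to the systemd cgroup manager, keep conmon in the pod cgroup, allow unprivileged low ports, enable IP forwarding, and restart the daemon. The sketch below condenses those steps into plain Go run locally; the runCmd helper and local (non-SSH) execution are simplifications for illustration, not minikube's ssh_runner API, and the default_sysctls bootstrap grep from the log is omitted.

    package main

    import (
        "fmt"
        "log"
        "os/exec"
    )

    // runCmd runs one shell command locally; minikube runs the same strings
    // on the node over SSH (the ssh_runner Run: lines above).
    func runCmd(cmd string) error {
        out, err := exec.Command("/bin/sh", "-c", cmd).CombinedOutput()
        if err != nil {
            return fmt.Errorf("%q failed: %v: %s", cmd, err, out)
        }
        return nil
    }

    func main() {
        conf := "/etc/crio/crio.conf.d/02-crio.conf"
        steps := []string{
            // pin the pause image and switch to the systemd cgroup manager
            `sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' ` + conf,
            `sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' ` + conf,
            // keep conmon in the pod cgroup
            `sudo sed -i '/conmon_cgroup = .*/d' ` + conf,
            `sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' ` + conf,
            // let pods bind low ports without extra privileges
            `sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' ` + conf,
            // enable forwarding and pick up the new config
            `sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"`,
            `sudo systemctl daemon-reload`,
            `sudo systemctl restart crio`,
        }
        for _, s := range steps {
            if err := runCmd(s); err != nil {
                log.Fatal(err)
            }
        }
    }
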
	I1202 20:55:35.327040  754167 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 20:55:35.327135  754167 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 20:55:35.331621  754167 start.go:564] Will wait 60s for crictl version
	I1202 20:55:35.331687  754167 ssh_runner.go:195] Run: which crictl
	I1202 20:55:35.335950  754167 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 20:55:35.364046  754167 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1202 20:55:35.364159  754167 ssh_runner.go:195] Run: crio --version
	I1202 20:55:35.399152  754167 ssh_runner.go:195] Run: crio --version
	I1202 20:55:35.433115  754167 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.2 ...
	I1202 20:55:35.434823  754167 cli_runner.go:164] Run: docker network inspect newest-cni-245604 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 20:55:35.455885  754167 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1202 20:55:35.460632  754167 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
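
The two commands just above first grep for an existing host.minikube.internal entry and then rewrite /etc/hosts with the gateway IP appended. A rough pure-Go equivalent of that idempotent update is sketched below; minikube itself runs the bash one-liner over SSH, so the function name and direct file access here are illustrative assumptions only.

    package main

    import (
        "log"
        "os"
        "strings"
    )

    // ensureHostEntry drops any stale line ending in "\t<name>" and appends
    // "<ip>\t<name>", matching the grep -v / echo pipeline in the log.
    func ensureHostEntry(path, ip, name string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+name) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, ip+"\t"+name)
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
        if err := ensureHostEntry("/etc/hosts", "192.168.103.1", "host.minikube.internal"); err != nil {
            log.Fatal(err)
        }
    }
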
	I1202 20:55:35.475593  754167 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1202 20:55:31.035723  754876 out.go:252] * Restarting existing docker container for "no-preload-336331" ...
	I1202 20:55:31.035816  754876 cli_runner.go:164] Run: docker start no-preload-336331
	I1202 20:55:31.292851  754876 cli_runner.go:164] Run: docker container inspect no-preload-336331 --format={{.State.Status}}
	I1202 20:55:31.313769  754876 kic.go:430] container "no-preload-336331" state is running.
	I1202 20:55:31.314216  754876 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-336331
	I1202 20:55:31.334780  754876 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/no-preload-336331/config.json ...
	I1202 20:55:31.335048  754876 machine.go:94] provisionDockerMachine start ...
	I1202 20:55:31.335151  754876 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-336331
	I1202 20:55:31.354591  754876 main.go:143] libmachine: Using SSH client type: native
	I1202 20:55:31.354858  754876 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33503 <nil> <nil>}
	I1202 20:55:31.354872  754876 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 20:55:31.355566  754876 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58170->127.0.0.1:33503: read: connection reset by peer
	I1202 20:55:34.502238  754876 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-336331
	
	I1202 20:55:34.502271  754876 ubuntu.go:182] provisioning hostname "no-preload-336331"
	I1202 20:55:34.502348  754876 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-336331
	I1202 20:55:34.523321  754876 main.go:143] libmachine: Using SSH client type: native
	I1202 20:55:34.523623  754876 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33503 <nil> <nil>}
	I1202 20:55:34.523645  754876 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-336331 && echo "no-preload-336331" | sudo tee /etc/hostname
	I1202 20:55:34.677019  754876 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-336331
	
	I1202 20:55:34.677147  754876 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-336331
	I1202 20:55:34.698753  754876 main.go:143] libmachine: Using SSH client type: native
	I1202 20:55:34.699053  754876 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33503 <nil> <nil>}
	I1202 20:55:34.699093  754876 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-336331' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-336331/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-336331' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 20:55:34.845454  754876 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1202 20:55:34.845491  754876 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-407427/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-407427/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-407427/.minikube}
	I1202 20:55:34.845521  754876 ubuntu.go:190] setting up certificates
	I1202 20:55:34.845534  754876 provision.go:84] configureAuth start
	I1202 20:55:34.845620  754876 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-336331
	I1202 20:55:34.864441  754876 provision.go:143] copyHostCerts
	I1202 20:55:34.864523  754876 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-407427/.minikube/cert.pem, removing ...
	I1202 20:55:34.864540  754876 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-407427/.minikube/cert.pem
	I1202 20:55:34.864623  754876 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-407427/.minikube/cert.pem (1123 bytes)
	I1202 20:55:34.864771  754876 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-407427/.minikube/key.pem, removing ...
	I1202 20:55:34.864783  754876 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-407427/.minikube/key.pem
	I1202 20:55:34.864816  754876 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-407427/.minikube/key.pem (1675 bytes)
	I1202 20:55:34.864916  754876 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-407427/.minikube/ca.pem, removing ...
	I1202 20:55:34.864927  754876 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-407427/.minikube/ca.pem
	I1202 20:55:34.864959  754876 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-407427/.minikube/ca.pem (1082 bytes)
	I1202 20:55:34.865098  754876 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-407427/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca-key.pem org=jenkins.no-preload-336331 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-336331]
	I1202 20:55:34.965191  754876 provision.go:177] copyRemoteCerts
	I1202 20:55:34.965254  754876 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 20:55:34.965289  754876 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-336331
	I1202 20:55:34.984766  754876 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33503 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/no-preload-336331/id_rsa Username:docker}
	I1202 20:55:35.087412  754876 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 20:55:35.107670  754876 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1202 20:55:35.132631  754876 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1202 20:55:35.152077  754876 provision.go:87] duration metric: took 306.514735ms to configureAuth
	I1202 20:55:35.152112  754876 ubuntu.go:206] setting minikube options for container-runtime
	I1202 20:55:35.152298  754876 config.go:182] Loaded profile config "no-preload-336331": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 20:55:35.152410  754876 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-336331
	I1202 20:55:35.174310  754876 main.go:143] libmachine: Using SSH client type: native
	I1202 20:55:35.174593  754876 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33503 <nil> <nil>}
	I1202 20:55:35.174612  754876 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 20:55:35.528736  754876 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 20:55:35.528765  754876 machine.go:97] duration metric: took 4.193699037s to provisionDockerMachine
	I1202 20:55:35.528779  754876 start.go:293] postStartSetup for "no-preload-336331" (driver="docker")
	I1202 20:55:35.528791  754876 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 20:55:35.528869  754876 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 20:55:35.528917  754876 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-336331
	I1202 20:55:35.551783  754876 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33503 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/no-preload-336331/id_rsa Username:docker}
	I1202 20:55:35.654832  754876 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 20:55:35.658852  754876 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 20:55:35.658876  754876 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 20:55:35.658889  754876 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-407427/.minikube/addons for local assets ...
	I1202 20:55:35.658945  754876 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-407427/.minikube/files for local assets ...
	I1202 20:55:35.659041  754876 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-407427/.minikube/files/etc/ssl/certs/4110322.pem -> 4110322.pem in /etc/ssl/certs
	I1202 20:55:35.659175  754876 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 20:55:35.668601  754876 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/files/etc/ssl/certs/4110322.pem --> /etc/ssl/certs/4110322.pem (1708 bytes)
	I1202 20:55:35.693095  754876 start.go:296] duration metric: took 164.297051ms for postStartSetup
	I1202 20:55:35.693180  754876 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 20:55:35.693216  754876 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-336331
	I1202 20:55:35.714085  754876 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33503 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/no-preload-336331/id_rsa Username:docker}
	I1202 20:55:35.814181  754876 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 20:55:35.477017  754167 kubeadm.go:884] updating cluster {Name:newest-cni-245604 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-245604 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker Mount
IP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 20:55:35.477235  754167 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1202 20:55:35.477292  754167 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 20:55:35.510123  754167 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 20:55:35.510148  754167 cache_images.go:86] Images are preloaded, skipping loading
	I1202 20:55:35.510155  754167 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.35.0-beta.0 crio true true} ...
	I1202 20:55:35.510312  754167 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-245604 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-245604 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1202 20:55:35.510401  754167 ssh_runner.go:195] Run: crio config
	I1202 20:55:35.565051  754167 cni.go:84] Creating CNI manager for ""
	I1202 20:55:35.565090  754167 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 20:55:35.565111  754167 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1202 20:55:35.565142  754167 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-245604 NodeName:newest-cni-245604 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 20:55:35.565332  754167 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-245604"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1202 20:55:35.565418  754167 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1202 20:55:35.574641  754167 binaries.go:51] Found k8s binaries, skipping transfer
	I1202 20:55:35.574718  754167 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 20:55:35.583019  754167 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I1202 20:55:35.596541  754167 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1202 20:55:35.611259  754167 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1202 20:55:35.625277  754167 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1202 20:55:35.629197  754167 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 20:55:35.640318  754167 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 20:55:35.725146  754167 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 20:55:35.754821  754167 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/newest-cni-245604 for IP: 192.168.103.2
	I1202 20:55:35.754849  754167 certs.go:195] generating shared ca certs ...
	I1202 20:55:35.754872  754167 certs.go:227] acquiring lock for ca certs: {Name:mkea11924193dd4dc459e16d70207bde383196df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:55:35.755040  754167 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-407427/.minikube/ca.key
	I1202 20:55:35.755121  754167 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-407427/.minikube/proxy-client-ca.key
	I1202 20:55:35.755134  754167 certs.go:257] generating profile certs ...
	I1202 20:55:35.755230  754167 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/newest-cni-245604/client.key
	I1202 20:55:35.755318  754167 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/newest-cni-245604/apiserver.key.b0e612d2
	I1202 20:55:35.755363  754167 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/newest-cni-245604/proxy-client.key
	I1202 20:55:35.755480  754167 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/411032.pem (1338 bytes)
	W1202 20:55:35.755516  754167 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-407427/.minikube/certs/411032_empty.pem, impossibly tiny 0 bytes
	I1202 20:55:35.755525  754167 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca-key.pem (1679 bytes)
	I1202 20:55:35.755556  754167 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca.pem (1082 bytes)
	I1202 20:55:35.755579  754167 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/cert.pem (1123 bytes)
	I1202 20:55:35.755601  754167 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/key.pem (1675 bytes)
	I1202 20:55:35.755649  754167 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/files/etc/ssl/certs/4110322.pem (1708 bytes)
	I1202 20:55:35.756837  754167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 20:55:35.777809  754167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1202 20:55:35.798547  754167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 20:55:35.822171  754167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1202 20:55:35.850339  754167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/newest-cni-245604/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1202 20:55:35.874856  754167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/newest-cni-245604/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1202 20:55:35.894151  754167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/newest-cni-245604/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 20:55:35.914600  754167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/newest-cni-245604/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1202 20:55:35.934765  754167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/files/etc/ssl/certs/4110322.pem --> /usr/share/ca-certificates/4110322.pem (1708 bytes)
	I1202 20:55:35.953725  754167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 20:55:35.974457  754167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/certs/411032.pem --> /usr/share/ca-certificates/411032.pem (1338 bytes)
	I1202 20:55:35.993023  754167 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 20:55:36.006712  754167 ssh_runner.go:195] Run: openssl version
	I1202 20:55:36.013165  754167 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 20:55:36.022226  754167 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 20:55:36.026258  754167 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 19:55 /usr/share/ca-certificates/minikubeCA.pem
	I1202 20:55:36.026319  754167 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 20:55:36.064103  754167 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 20:55:36.072903  754167 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/411032.pem && ln -fs /usr/share/ca-certificates/411032.pem /etc/ssl/certs/411032.pem"
	I1202 20:55:36.082740  754167 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/411032.pem
	I1202 20:55:36.086790  754167 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 20:13 /usr/share/ca-certificates/411032.pem
	I1202 20:55:36.086845  754167 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/411032.pem
	I1202 20:55:36.122744  754167 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/411032.pem /etc/ssl/certs/51391683.0"
	I1202 20:55:36.131758  754167 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4110322.pem && ln -fs /usr/share/ca-certificates/4110322.pem /etc/ssl/certs/4110322.pem"
	I1202 20:55:36.141418  754167 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4110322.pem
	I1202 20:55:36.146236  754167 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 20:13 /usr/share/ca-certificates/4110322.pem
	I1202 20:55:36.146306  754167 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4110322.pem
	I1202 20:55:36.188792  754167 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4110322.pem /etc/ssl/certs/3ec20f2e.0"
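
The repeated test/ls/openssl/ln sequence above installs each CA into the node's trust store: a PEM already copied to /usr/share/ca-certificates is linked into /etc/ssl/certs, and an OpenSSL subject-hash symlink (<hash>.0) is added so library lookups can find it. Below is a minimal sketch of that pattern, assuming local execution; the trustCertificate helper name and error handling are not minikube's.

    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "strings"
    )

    // trustCertificate links a CA from /usr/share/ca-certificates into
    // /etc/ssl/certs and creates the <subject-hash>.0 symlink OpenSSL expects.
    func trustCertificate(name string) error {
        src := "/usr/share/ca-certificates/" + name
        dst := "/etc/ssl/certs/" + name
        if out, err := exec.Command("sudo", "ln", "-fs", src, dst).CombinedOutput(); err != nil {
            return fmt.Errorf("link %s: %v: %s", dst, err, out)
        }
        // "openssl x509 -hash -noout" prints the subject hash used for the .0 name
        hashOut, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", src).Output()
        if err != nil {
            return err
        }
        hashLink := "/etc/ssl/certs/" + strings.TrimSpace(string(hashOut)) + ".0"
        if out, err := exec.Command("sudo", "ln", "-fs", dst, hashLink).CombinedOutput(); err != nil {
            return fmt.Errorf("link %s: %v: %s", hashLink, err, out)
        }
        return nil
    }

    func main() {
        for _, pem := range []string{"minikubeCA.pem", "411032.pem", "4110322.pem"} {
            if err := trustCertificate(pem); err != nil {
                log.Fatal(err)
            }
        }
    }
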
	I1202 20:55:36.197928  754167 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 20:55:36.202669  754167 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1202 20:55:36.238962  754167 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1202 20:55:36.285583  754167 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1202 20:55:36.333227  754167 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1202 20:55:36.388408  754167 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1202 20:55:36.453530  754167 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1202 20:55:36.504694  754167 kubeadm.go:401] StartCluster: {Name:newest-cni-245604 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-245604 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP:
MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 20:55:36.504804  754167 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 20:55:36.504858  754167 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 20:55:36.541939  754167 cri.go:89] found id: "299a73dcc241327fe5cf3f205be0f0fa45b6267d9d291d2b15d27c02c06717cf"
	I1202 20:55:36.541962  754167 cri.go:89] found id: "7f956e3ba93eb4957689173471e1faef57d87fd2d2ec24476026588c56c69ba2"
	I1202 20:55:36.541968  754167 cri.go:89] found id: "d1dc95faf60a35cdf8dd5e3d023890a6a83f6e6ef58c93949a275bea726c4560"
	I1202 20:55:36.541972  754167 cri.go:89] found id: "c4e1eb06953444823120ccc3fc5298bbaa5c977cbbf41e594e6b162545a4994c"
	I1202 20:55:36.541977  754167 cri.go:89] found id: ""
	I1202 20:55:36.542024  754167 ssh_runner.go:195] Run: sudo runc list -f json
	W1202 20:55:36.565136  754167 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T20:55:36Z" level=error msg="open /run/runc: no such file or directory"
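
Above, minikube enumerates kube-system containers through crictl (the four IDs; the blank "found id" comes from the trailing newline), then tries `runc list` to check for paused containers; that call fails because /run/runc does not exist yet, which is logged as a warning and tolerated. A small sketch of the crictl half, assuming local execution and a hypothetical helper name:

    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "strings"
    )

    // kubeSystemContainerIDs runs the same crictl query as the log and
    // returns the non-empty container IDs, one per output line.
    func kubeSystemContainerIDs() ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
            "--label", "io.kubernetes.pod.namespace=kube-system").Output()
        if err != nil {
            return nil, err
        }
        var ids []string
        for _, line := range strings.Split(string(out), "\n") {
            if id := strings.TrimSpace(line); id != "" {
                ids = append(ids, id)
            }
        }
        return ids, nil
    }

    func main() {
        ids, err := kubeSystemContainerIDs()
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(ids)
    }
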
	I1202 20:55:36.565210  754167 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 20:55:36.574607  754167 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1202 20:55:36.574633  754167 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1202 20:55:36.574690  754167 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1202 20:55:36.583461  754167 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1202 20:55:36.584384  754167 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-245604" does not appear in /home/jenkins/minikube-integration/21997-407427/kubeconfig
	I1202 20:55:36.584909  754167 kubeconfig.go:62] /home/jenkins/minikube-integration/21997-407427/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-245604" cluster setting kubeconfig missing "newest-cni-245604" context setting]
	I1202 20:55:36.585784  754167 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-407427/kubeconfig: {Name:mk71769b01652992a8aaa239aae4f5c38b3500e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:55:36.587810  754167 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1202 20:55:36.599897  754167 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1202 20:55:36.599942  754167 kubeadm.go:602] duration metric: took 25.294925ms to restartPrimaryControlPlane
	I1202 20:55:36.599954  754167 kubeadm.go:403] duration metric: took 95.272107ms to StartCluster
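
The diff of kubeadm.yaml against kubeadm.yaml.new a few lines up is how minikube decides that the running control plane does not require reconfiguration. A minimal sketch of that decision, assuming local file access and a hypothetical needsReconfigure name:

    package main

    import (
        "bytes"
        "fmt"
        "log"
        "os"
    )

    // needsReconfigure reports whether the freshly rendered kubeadm.yaml.new
    // differs from the config the running control plane was started with.
    func needsReconfigure() (bool, error) {
        current, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
        if err != nil {
            if os.IsNotExist(err) {
                return true, nil // nothing deployed yet
            }
            return false, err
        }
        staged, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
        if err != nil {
            return false, err
        }
        return !bytes.Equal(current, staged), nil
    }

    func main() {
        changed, err := needsReconfigure()
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("needs reconfigure:", changed)
    }
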
	I1202 20:55:36.599982  754167 settings.go:142] acquiring lock: {Name:mk79858205e0c110b31d2645b49097e349ff37c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:55:36.600061  754167 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-407427/kubeconfig
	I1202 20:55:36.601193  754167 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-407427/kubeconfig: {Name:mk71769b01652992a8aaa239aae4f5c38b3500e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:55:36.601486  754167 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 20:55:36.601573  754167 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1202 20:55:36.601678  754167 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-245604"
	I1202 20:55:36.601703  754167 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-245604"
	W1202 20:55:36.601711  754167 addons.go:248] addon storage-provisioner should already be in state true
	I1202 20:55:36.601705  754167 addons.go:70] Setting dashboard=true in profile "newest-cni-245604"
	I1202 20:55:36.601726  754167 addons.go:239] Setting addon dashboard=true in "newest-cni-245604"
	W1202 20:55:36.601740  754167 addons.go:248] addon dashboard should already be in state true
	I1202 20:55:36.601742  754167 host.go:66] Checking if "newest-cni-245604" exists ...
	I1202 20:55:36.601734  754167 addons.go:70] Setting default-storageclass=true in profile "newest-cni-245604"
	I1202 20:55:36.601765  754167 host.go:66] Checking if "newest-cni-245604" exists ...
	I1202 20:55:36.601764  754167 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-245604"
	I1202 20:55:36.602129  754167 cli_runner.go:164] Run: docker container inspect newest-cni-245604 --format={{.State.Status}}
	I1202 20:55:36.602286  754167 cli_runner.go:164] Run: docker container inspect newest-cni-245604 --format={{.State.Status}}
	I1202 20:55:36.602355  754167 cli_runner.go:164] Run: docker container inspect newest-cni-245604 --format={{.State.Status}}
	I1202 20:55:36.602681  754167 config.go:182] Loaded profile config "newest-cni-245604": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 20:55:36.607394  754167 out.go:179] * Verifying Kubernetes components...
	I1202 20:55:36.608829  754167 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 20:55:36.629466  754167 addons.go:239] Setting addon default-storageclass=true in "newest-cni-245604"
	W1202 20:55:36.629491  754167 addons.go:248] addon default-storageclass should already be in state true
	I1202 20:55:36.629524  754167 host.go:66] Checking if "newest-cni-245604" exists ...
	I1202 20:55:36.630089  754167 cli_runner.go:164] Run: docker container inspect newest-cni-245604 --format={{.State.Status}}
	I1202 20:55:36.634627  754167 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 20:55:36.634691  754167 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1202 20:55:36.636580  754167 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 20:55:36.636604  754167 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1202 20:55:36.636658  754167 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1202 20:55:35.820385  754876 fix.go:56] duration metric: took 4.805602849s for fixHost
	I1202 20:55:35.820422  754876 start.go:83] releasing machines lock for "no-preload-336331", held for 4.80566728s
	I1202 20:55:35.820504  754876 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-336331
	I1202 20:55:35.846277  754876 ssh_runner.go:195] Run: cat /version.json
	I1202 20:55:35.846354  754876 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-336331
	I1202 20:55:35.846523  754876 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 20:55:35.846722  754876 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-336331
	I1202 20:55:35.870812  754876 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33503 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/no-preload-336331/id_rsa Username:docker}
	I1202 20:55:35.871357  754876 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33503 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/no-preload-336331/id_rsa Username:docker}
	I1202 20:55:35.969307  754876 ssh_runner.go:195] Run: systemctl --version
	I1202 20:55:36.029257  754876 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 20:55:36.067995  754876 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 20:55:36.073190  754876 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 20:55:36.073249  754876 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 20:55:36.082104  754876 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1202 20:55:36.082127  754876 start.go:496] detecting cgroup driver to use...
	I1202 20:55:36.082161  754876 detect.go:190] detected "systemd" cgroup driver on host os
	I1202 20:55:36.082227  754876 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 20:55:36.098741  754876 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 20:55:36.112109  754876 docker.go:218] disabling cri-docker service (if available) ...
	I1202 20:55:36.112165  754876 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 20:55:36.128398  754876 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 20:55:36.143189  754876 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 20:55:36.231026  754876 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 20:55:36.337439  754876 docker.go:234] disabling docker service ...
	I1202 20:55:36.337508  754876 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 20:55:36.361832  754876 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 20:55:36.379661  754876 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 20:55:36.501877  754876 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 20:55:36.630027  754876 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 20:55:36.651957  754876 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 20:55:36.683462  754876 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1202 20:55:36.683533  754876 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:55:36.699588  754876 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1202 20:55:36.699670  754876 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:55:36.712135  754876 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:55:36.723293  754876 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:55:36.735406  754876 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 20:55:36.745507  754876 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:55:36.758612  754876 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:55:36.771554  754876 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:55:36.785872  754876 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 20:55:36.799501  754876 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 20:55:36.811519  754876 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 20:55:36.931781  754876 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1202 20:55:37.099846  754876 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 20:55:37.099936  754876 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 20:55:37.104702  754876 start.go:564] Will wait 60s for crictl version
	I1202 20:55:37.104783  754876 ssh_runner.go:195] Run: which crictl
	I1202 20:55:37.109725  754876 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 20:55:37.143623  754876 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1202 20:55:37.143815  754876 ssh_runner.go:195] Run: crio --version
	I1202 20:55:37.178031  754876 ssh_runner.go:195] Run: crio --version
	I1202 20:55:37.214938  754876 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.2 ...
	I1202 20:55:36.636685  754167 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-245604
	I1202 20:55:36.638930  754167 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1202 20:55:36.638957  754167 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1202 20:55:36.639030  754167 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-245604
	I1202 20:55:36.665983  754167 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1202 20:55:36.666107  754167 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1202 20:55:36.666215  754167 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-245604
	I1202 20:55:36.674421  754167 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33498 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/newest-cni-245604/id_rsa Username:docker}
	I1202 20:55:36.675384  754167 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33498 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/newest-cni-245604/id_rsa Username:docker}
	I1202 20:55:36.691840  754167 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33498 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/newest-cni-245604/id_rsa Username:docker}
	I1202 20:55:36.768596  754167 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 20:55:36.788522  754167 api_server.go:52] waiting for apiserver process to appear ...
	I1202 20:55:36.788601  754167 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:55:36.801331  754167 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1202 20:55:36.801361  754167 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1202 20:55:36.801394  754167 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 20:55:36.809676  754167 api_server.go:72] duration metric: took 208.145514ms to wait for apiserver process to appear ...
	I1202 20:55:36.809708  754167 api_server.go:88] waiting for apiserver healthz status ...
	I1202 20:55:36.809733  754167 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1202 20:55:36.813591  754167 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1202 20:55:36.820786  754167 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1202 20:55:36.820816  754167 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1202 20:55:36.840518  754167 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1202 20:55:36.840547  754167 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1202 20:55:36.876371  754167 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1202 20:55:36.876397  754167 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1202 20:55:36.896551  754167 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1202 20:55:36.896621  754167 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1202 20:55:36.914247  754167 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1202 20:55:36.914275  754167 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1202 20:55:36.931306  754167 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1202 20:55:36.931344  754167 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1202 20:55:36.948230  754167 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1202 20:55:36.948258  754167 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1202 20:55:36.965772  754167 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1202 20:55:36.965800  754167 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1202 20:55:36.985375  754167 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
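
The single kubectl invocation above applies all ten dashboard manifests at once, using the kubectl bundled for the target Kubernetes version and the node-local kubeconfig. A sketch of building that call, with local (non-SSH) execution as a simplification and the helper name assumed:

    package main

    import (
        "log"
        "os"
        "os/exec"
    )

    // applyManifests invokes the versioned kubectl with one -f flag per
    // manifest, pointed at the cluster-local kubeconfig, as in the log.
    func applyManifests(kubectl string, manifests []string) error {
        args := []string{"apply"}
        for _, m := range manifests {
            args = append(args, "-f", m)
        }
        cmd := exec.Command(kubectl, args...)
        cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        return cmd.Run()
    }

    func main() {
        err := applyManifests(
            "/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
            []string{
                "/etc/kubernetes/addons/dashboard-ns.yaml",
                "/etc/kubernetes/addons/dashboard-svc.yaml", // ...plus the other eight
            })
        if err != nil {
            log.Fatal(err)
        }
    }
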
	I1202 20:55:37.813819  754167 api_server.go:279] https://192.168.103.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1202 20:55:37.813865  754167 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1202 20:55:37.813880  754167 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1202 20:55:37.825883  754167 api_server.go:279] https://192.168.103.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1202 20:55:37.825917  754167 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1202 20:55:38.310854  754167 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1202 20:55:38.320038  754167 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 20:55:38.320096  754167 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
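
Note: the repeated "Checking apiserver healthz" entries above (api_server.go:253/279) are minikube polling the apiserver's /healthz endpoint until it returns 200; the 403 responses appear while anonymous access is still forbidden and the 500 responses while poststarthooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes) are still running. A minimal, hypothetical Go sketch of such a poll follows; it is illustrative only, not minikube's actual implementation, and it assumes an anonymous HTTPS probe with certificate verification disabled.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the deadline passes.
// Non-200 bodies (403 before RBAC bootstrap, 500 while poststarthooks run)
// are printed so progress stays visible, mirroring the log output above.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// Assumption: skip TLS verification, as an anonymous probe would.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.103.2:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
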
	I1202 20:55:38.532017  754167 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.730588261s)
	I1202 20:55:38.532140  754167 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.718523685s)
	I1202 20:55:38.532806  754167 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.54738608s)
	I1202 20:55:38.536203  754167 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-245604 addons enable metrics-server
	
	I1202 20:55:38.548985  754167 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1202 20:55:37.217915  754876 cli_runner.go:164] Run: docker network inspect no-preload-336331 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 20:55:37.237442  754876 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1202 20:55:37.242420  754876 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 20:55:37.255328  754876 kubeadm.go:884] updating cluster {Name:no-preload-336331 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-336331 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 20:55:37.255503  754876 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1202 20:55:37.255545  754876 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 20:55:37.292726  754876 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 20:55:37.292753  754876 cache_images.go:86] Images are preloaded, skipping loading
	I1202 20:55:37.292762  754876 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 crio true true} ...
	I1202 20:55:37.292896  754876 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-336331 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-336331 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1202 20:55:37.292988  754876 ssh_runner.go:195] Run: crio config
	I1202 20:55:37.353757  754876 cni.go:84] Creating CNI manager for ""
	I1202 20:55:37.353786  754876 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 20:55:37.353805  754876 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1202 20:55:37.353835  754876 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-336331 NodeName:no-preload-336331 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 20:55:37.354006  754876 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-336331"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
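
Note: the kubeadm config printed above is a multi-document YAML (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that minikube renders in memory and copies to /var/tmp/minikube/kubeadm.yaml.new on the node (see the scp lines that follow). As an illustration only (not minikube code), splitting and inspecting such a multi-document file in Go could look like the sketch below; it assumes gopkg.in/yaml.v3 is available and a hypothetical local copy named kubeadm.yaml.

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3" // assumption: gopkg.in/yaml.v3 is available
)

func main() {
	f, err := os.Open("kubeadm.yaml") // hypothetical local copy of the config above
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// A yaml.Decoder yields one document per Decode call until io.EOF.
	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		// Each document carries apiVersion/kind, e.g. kubeadm.k8s.io/v1beta4 ClusterConfiguration.
		fmt.Printf("%v %v\n", doc["apiVersion"], doc["kind"])
	}
}
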
	I1202 20:55:37.354125  754876 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1202 20:55:37.363524  754876 binaries.go:51] Found k8s binaries, skipping transfer
	I1202 20:55:37.363598  754876 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 20:55:37.372628  754876 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1202 20:55:37.388745  754876 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1202 20:55:37.403632  754876 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2220 bytes)
	I1202 20:55:37.417939  754876 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1202 20:55:37.424980  754876 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 20:55:37.442513  754876 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 20:55:37.527222  754876 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 20:55:37.558886  754876 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/no-preload-336331 for IP: 192.168.76.2
	I1202 20:55:37.558915  754876 certs.go:195] generating shared ca certs ...
	I1202 20:55:37.558934  754876 certs.go:227] acquiring lock for ca certs: {Name:mkea11924193dd4dc459e16d70207bde383196df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:55:37.559141  754876 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-407427/.minikube/ca.key
	I1202 20:55:37.559235  754876 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-407427/.minikube/proxy-client-ca.key
	I1202 20:55:37.559254  754876 certs.go:257] generating profile certs ...
	I1202 20:55:37.559365  754876 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/no-preload-336331/client.key
	I1202 20:55:37.559465  754876 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/no-preload-336331/apiserver.key.9874f3c1
	I1202 20:55:37.559538  754876 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/no-preload-336331/proxy-client.key
	I1202 20:55:37.559698  754876 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/411032.pem (1338 bytes)
	W1202 20:55:37.559745  754876 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-407427/.minikube/certs/411032_empty.pem, impossibly tiny 0 bytes
	I1202 20:55:37.559762  754876 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca-key.pem (1679 bytes)
	I1202 20:55:37.559797  754876 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca.pem (1082 bytes)
	I1202 20:55:37.559831  754876 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/cert.pem (1123 bytes)
	I1202 20:55:37.559865  754876 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/key.pem (1675 bytes)
	I1202 20:55:37.559934  754876 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/files/etc/ssl/certs/4110322.pem (1708 bytes)
	I1202 20:55:37.560809  754876 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 20:55:37.584511  754876 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1202 20:55:37.606272  754876 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 20:55:37.628219  754876 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1202 20:55:37.657274  754876 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/no-preload-336331/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1202 20:55:37.682293  754876 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/no-preload-336331/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1202 20:55:37.706745  754876 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/no-preload-336331/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 20:55:37.736098  754876 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/no-preload-336331/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1202 20:55:37.760170  754876 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 20:55:37.781340  754876 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/certs/411032.pem --> /usr/share/ca-certificates/411032.pem (1338 bytes)
	I1202 20:55:37.804887  754876 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/files/etc/ssl/certs/4110322.pem --> /usr/share/ca-certificates/4110322.pem (1708 bytes)
	I1202 20:55:37.839727  754876 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 20:55:37.866023  754876 ssh_runner.go:195] Run: openssl version
	I1202 20:55:37.876438  754876 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/411032.pem && ln -fs /usr/share/ca-certificates/411032.pem /etc/ssl/certs/411032.pem"
	I1202 20:55:37.890144  754876 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/411032.pem
	I1202 20:55:37.899418  754876 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 20:13 /usr/share/ca-certificates/411032.pem
	I1202 20:55:37.899491  754876 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/411032.pem
	I1202 20:55:37.960736  754876 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/411032.pem /etc/ssl/certs/51391683.0"
	I1202 20:55:37.973140  754876 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4110322.pem && ln -fs /usr/share/ca-certificates/4110322.pem /etc/ssl/certs/4110322.pem"
	I1202 20:55:37.984052  754876 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4110322.pem
	I1202 20:55:37.989111  754876 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 20:13 /usr/share/ca-certificates/4110322.pem
	I1202 20:55:37.989184  754876 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4110322.pem
	I1202 20:55:38.034974  754876 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4110322.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 20:55:38.045842  754876 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 20:55:38.056456  754876 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 20:55:38.061115  754876 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 19:55 /usr/share/ca-certificates/minikubeCA.pem
	I1202 20:55:38.061178  754876 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 20:55:38.113012  754876 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 20:55:38.124545  754876 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 20:55:38.129948  754876 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1202 20:55:38.184337  754876 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1202 20:55:38.245536  754876 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1202 20:55:38.308092  754876 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1202 20:55:38.367756  754876 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1202 20:55:38.430249  754876 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
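
Note: the openssl runs above use "x509 -noout -checkend 86400" to confirm each control-plane certificate remains valid for at least 24 hours before reusing it. A rough Go equivalent using crypto/x509 is sketched below; it is illustrative only, and the certificate path in main is a hypothetical example taken from the log.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM-encoded certificate at path
// expires within the given duration (the openssl -checkend semantics).
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	// Hypothetical path; the log above checks apiserver-kubelet-client.crt and related certs.
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
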
	I1202 20:55:38.476667  754876 kubeadm.go:401] StartCluster: {Name:no-preload-336331 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-336331 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 20:55:38.476796  754876 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 20:55:38.476865  754876 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 20:55:38.514725  754876 cri.go:89] found id: "fe483c8206ed4feb9f82c31650dd1c179edfd56fdbd85b46b0866b331f6ea99d"
	I1202 20:55:38.514752  754876 cri.go:89] found id: "8a39789ad0781128fb83397c05c270ff26c09bd32ec5d4c90b8ca4d3a01533cd"
	I1202 20:55:38.514758  754876 cri.go:89] found id: "cec9f1979d354143b12bba5938c36bf941dd1a2a9c5096761b95b27d36bc9e59"
	I1202 20:55:38.514764  754876 cri.go:89] found id: "9d960cc48cf5c1a7210c34cfa4e205107d9dd729104ed2798e71e12ba001d7ec"
	I1202 20:55:38.514768  754876 cri.go:89] found id: ""
	I1202 20:55:38.514819  754876 ssh_runner.go:195] Run: sudo runc list -f json
	W1202 20:55:38.530176  754876 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T20:55:38Z" level=error msg="open /run/runc: no such file or directory"
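
Note: the cri.go lines above enumerate existing kube-system containers by running "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system" on the node; the subsequent "runc list" fails only because /run/runc does not exist, which the restart path tolerates. A minimal sketch of invoking crictl from Go follows; it assumes crictl on PATH and sudo privileges and is not minikube's actual helper.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listKubeSystemContainers returns the IDs of all containers (running or not)
// whose pod lives in the kube-system namespace, using crictl's --quiet output,
// which prints one container ID per line.
func listKubeSystemContainers() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	ids, err := listKubeSystemContainers()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	for _, id := range ids {
		fmt.Println("found id:", id)
	}
}
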
	I1202 20:55:38.530260  754876 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 20:55:38.542247  754876 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1202 20:55:38.542274  754876 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1202 20:55:38.542323  754876 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1202 20:55:38.554324  754876 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1202 20:55:38.555469  754876 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-336331" does not appear in /home/jenkins/minikube-integration/21997-407427/kubeconfig
	I1202 20:55:38.555996  754876 kubeconfig.go:62] /home/jenkins/minikube-integration/21997-407427/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-336331" cluster setting kubeconfig missing "no-preload-336331" context setting]
	I1202 20:55:38.556665  754876 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-407427/kubeconfig: {Name:mk71769b01652992a8aaa239aae4f5c38b3500e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:55:38.558593  754876 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1202 20:55:38.568608  754876 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1202 20:55:38.568642  754876 kubeadm.go:602] duration metric: took 26.361754ms to restartPrimaryControlPlane
	I1202 20:55:38.568653  754876 kubeadm.go:403] duration metric: took 92.000112ms to StartCluster
	I1202 20:55:38.568673  754876 settings.go:142] acquiring lock: {Name:mk79858205e0c110b31d2645b49097e349ff37c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:55:38.568751  754876 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-407427/kubeconfig
	I1202 20:55:38.570735  754876 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-407427/kubeconfig: {Name:mk71769b01652992a8aaa239aae4f5c38b3500e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:55:38.571020  754876 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 20:55:38.571230  754876 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1202 20:55:38.571326  754876 addons.go:70] Setting storage-provisioner=true in profile "no-preload-336331"
	I1202 20:55:38.571346  754876 addons.go:239] Setting addon storage-provisioner=true in "no-preload-336331"
	W1202 20:55:38.571355  754876 addons.go:248] addon storage-provisioner should already be in state true
	I1202 20:55:38.571394  754876 host.go:66] Checking if "no-preload-336331" exists ...
	I1202 20:55:38.571903  754876 cli_runner.go:164] Run: docker container inspect no-preload-336331 --format={{.State.Status}}
	I1202 20:55:38.572327  754876 config.go:182] Loaded profile config "no-preload-336331": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 20:55:38.572328  754876 addons.go:70] Setting dashboard=true in profile "no-preload-336331"
	I1202 20:55:38.572352  754876 addons.go:239] Setting addon dashboard=true in "no-preload-336331"
	W1202 20:55:38.572362  754876 addons.go:248] addon dashboard should already be in state true
	I1202 20:55:38.572394  754876 host.go:66] Checking if "no-preload-336331" exists ...
	I1202 20:55:38.572396  754876 addons.go:70] Setting default-storageclass=true in profile "no-preload-336331"
	I1202 20:55:38.572425  754876 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-336331"
	I1202 20:55:38.572757  754876 cli_runner.go:164] Run: docker container inspect no-preload-336331 --format={{.State.Status}}
	I1202 20:55:38.572898  754876 cli_runner.go:164] Run: docker container inspect no-preload-336331 --format={{.State.Status}}
	I1202 20:55:38.573739  754876 out.go:179] * Verifying Kubernetes components...
	I1202 20:55:38.578305  754876 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 20:55:38.599247  754876 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1202 20:55:38.599247  754876 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 20:55:38.599540  754876 addons.go:239] Setting addon default-storageclass=true in "no-preload-336331"
	W1202 20:55:38.599564  754876 addons.go:248] addon default-storageclass should already be in state true
	I1202 20:55:38.599593  754876 host.go:66] Checking if "no-preload-336331" exists ...
	I1202 20:55:38.600150  754876 cli_runner.go:164] Run: docker container inspect no-preload-336331 --format={{.State.Status}}
	I1202 20:55:38.601036  754876 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 20:55:38.601056  754876 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1202 20:55:38.601332  754876 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-336331
	I1202 20:55:38.602448  754876 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1202 20:55:35.546507  743547 pod_ready.go:104] pod "coredns-5dd5756b68-ptzsf" is not "Ready", error: <nil>
	W1202 20:55:38.045928  743547 pod_ready.go:104] pod "coredns-5dd5756b68-ptzsf" is not "Ready", error: <nil>
	I1202 20:55:38.547944  743547 pod_ready.go:94] pod "coredns-5dd5756b68-ptzsf" is "Ready"
	I1202 20:55:38.547984  743547 pod_ready.go:86] duration metric: took 37.508045016s for pod "coredns-5dd5756b68-ptzsf" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:55:38.552483  743547 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-992336" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:55:38.558420  743547 pod_ready.go:94] pod "etcd-old-k8s-version-992336" is "Ready"
	I1202 20:55:38.558444  743547 pod_ready.go:86] duration metric: took 5.919253ms for pod "etcd-old-k8s-version-992336" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:55:38.561764  743547 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-992336" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:55:38.566885  743547 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-992336" is "Ready"
	I1202 20:55:38.566920  743547 pod_ready.go:86] duration metric: took 5.129398ms for pod "kube-apiserver-old-k8s-version-992336" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:55:38.570494  743547 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-992336" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:55:38.551843  754167 addons.go:530] duration metric: took 1.950275832s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1202 20:55:38.809893  754167 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1202 20:55:38.817804  754167 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 20:55:38.817838  754167 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 20:55:39.310451  754167 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1202 20:55:39.314768  754167 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1202 20:55:39.316004  754167 api_server.go:141] control plane version: v1.35.0-beta.0
	I1202 20:55:39.316040  754167 api_server.go:131] duration metric: took 2.506325431s to wait for apiserver health ...
	I1202 20:55:39.316050  754167 system_pods.go:43] waiting for kube-system pods to appear ...
	I1202 20:55:39.320024  754167 system_pods.go:59] 8 kube-system pods found
	I1202 20:55:39.320090  754167 system_pods.go:61] "coredns-7d764666f9-blfz2" [431846c2-b261-4ac9-ae34-f5e7c9bd7c30] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1202 20:55:39.320108  754167 system_pods.go:61] "etcd-newest-cni-245604" [0153ab66-c89e-4cb9-956f-af095ae01a6d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1202 20:55:39.320115  754167 system_pods.go:61] "kindnet-flbpz" [5931b461-203e-4906-9cb7-0a7ddcf9c5ae] Running
	I1202 20:55:39.320125  754167 system_pods.go:61] "kube-apiserver-newest-cni-245604" [aedbda6a-d95b-4616-9c31-4931593df7d7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1202 20:55:39.320139  754167 system_pods.go:61] "kube-controller-manager-newest-cni-245604" [f659dbd1-c031-4078-a1e3-e75ac74f2ea4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1202 20:55:39.320148  754167 system_pods.go:61] "kube-proxy-khm6s" [990486ba-3da5-4666-b441-52e3fcc4c81f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1202 20:55:39.320156  754167 system_pods.go:61] "kube-scheduler-newest-cni-245604" [652fff1e-9b61-4947-a077-8f039064ad96] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1202 20:55:39.320161  754167 system_pods.go:61] "storage-provisioner" [6eb8872b-114f-434c-b0ca-a8eaa4c5da9e] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1202 20:55:39.320183  754167 system_pods.go:74] duration metric: took 4.114258ms to wait for pod list to return data ...
	I1202 20:55:39.320195  754167 default_sa.go:34] waiting for default service account to be created ...
	I1202 20:55:39.323003  754167 default_sa.go:45] found service account: "default"
	I1202 20:55:39.323027  754167 default_sa.go:55] duration metric: took 2.826959ms for default service account to be created ...
	I1202 20:55:39.323040  754167 kubeadm.go:587] duration metric: took 2.721517639s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1202 20:55:39.323057  754167 node_conditions.go:102] verifying NodePressure condition ...
	I1202 20:55:39.325889  754167 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1202 20:55:39.325917  754167 node_conditions.go:123] node cpu capacity is 8
	I1202 20:55:39.325931  754167 node_conditions.go:105] duration metric: took 2.86783ms to run NodePressure ...
	I1202 20:55:39.325945  754167 start.go:242] waiting for startup goroutines ...
	I1202 20:55:39.325952  754167 start.go:247] waiting for cluster config update ...
	I1202 20:55:39.325962  754167 start.go:256] writing updated cluster config ...
	I1202 20:55:39.326270  754167 ssh_runner.go:195] Run: rm -f paused
	I1202 20:55:39.387043  754167 start.go:625] kubectl: 1.34.2, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1202 20:55:39.389967  754167 out.go:179] * Done! kubectl is now configured to use "newest-cni-245604" cluster and "default" namespace by default
	I1202 20:55:38.744638  743547 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-992336" is "Ready"
	I1202 20:55:38.744674  743547 pod_ready.go:86] duration metric: took 174.151338ms for pod "kube-controller-manager-old-k8s-version-992336" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:55:38.945657  743547 pod_ready.go:83] waiting for pod "kube-proxy-qpzt8" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:55:39.344786  743547 pod_ready.go:94] pod "kube-proxy-qpzt8" is "Ready"
	I1202 20:55:39.344822  743547 pod_ready.go:86] duration metric: took 399.134453ms for pod "kube-proxy-qpzt8" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:55:39.545189  743547 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-992336" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:55:39.946507  743547 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-992336" is "Ready"
	I1202 20:55:39.946544  743547 pod_ready.go:86] duration metric: took 401.329171ms for pod "kube-scheduler-old-k8s-version-992336" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:55:39.946561  743547 pod_ready.go:40] duration metric: took 38.911613029s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1202 20:55:40.024107  743547 start.go:625] kubectl: 1.34.2, cluster: 1.28.0 (minor skew: 6)
	I1202 20:55:40.026891  743547 out.go:203] 
	W1202 20:55:40.028292  743547 out.go:285] ! /usr/local/bin/kubectl is version 1.34.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1202 20:55:40.031751  743547 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1202 20:55:40.033710  743547 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-992336" cluster and "default" namespace by default
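
Note: the pod_ready.go entries above wait for each old-k8s-version-992336 control-plane pod in kube-system (coredns, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler) to report the Ready condition. A comparable wait using client-go is sketched below; the kubeconfig path and label selector are assumptions for illustration, not the test's actual values.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's PodReady condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // hypothetical kubeconfig
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()
	for {
		// e.g. the coredns pods waited on above carry the k8s-app=kube-dns label.
		pods, err := cs.CoreV1().Pods("kube-system").List(ctx,
			metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
		if err == nil && len(pods.Items) > 0 && isPodReady(&pods.Items[0]) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
}
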
	I1202 20:55:38.603837  754876 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1202 20:55:38.603859  754876 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1202 20:55:38.603927  754876 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-336331
	I1202 20:55:38.638496  754876 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33503 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/no-preload-336331/id_rsa Username:docker}
	I1202 20:55:38.638834  754876 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1202 20:55:38.638866  754876 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1202 20:55:38.638933  754876 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-336331
	I1202 20:55:38.639408  754876 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33503 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/no-preload-336331/id_rsa Username:docker}
	I1202 20:55:38.661343  754876 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33503 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/no-preload-336331/id_rsa Username:docker}
	I1202 20:55:38.726129  754876 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 20:55:38.743906  754876 node_ready.go:35] waiting up to 6m0s for node "no-preload-336331" to be "Ready" ...
	I1202 20:55:38.755953  754876 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 20:55:38.758458  754876 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1202 20:55:38.758484  754876 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1202 20:55:38.780610  754876 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1202 20:55:38.780642  754876 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1202 20:55:38.789463  754876 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1202 20:55:38.799980  754876 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1202 20:55:38.800024  754876 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1202 20:55:38.827012  754876 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1202 20:55:38.827038  754876 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1202 20:55:38.847754  754876 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1202 20:55:38.847789  754876 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1202 20:55:38.870373  754876 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1202 20:55:38.870401  754876 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1202 20:55:38.888662  754876 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1202 20:55:38.888702  754876 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1202 20:55:38.907704  754876 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1202 20:55:38.907741  754876 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1202 20:55:38.928298  754876 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1202 20:55:38.928328  754876 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1202 20:55:38.946489  754876 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1202 20:55:39.622332  754876 node_ready.go:49] node "no-preload-336331" is "Ready"
	I1202 20:55:39.622378  754876 node_ready.go:38] duration metric: took 878.437194ms for node "no-preload-336331" to be "Ready" ...
	I1202 20:55:39.622398  754876 api_server.go:52] waiting for apiserver process to appear ...
	I1202 20:55:39.622460  754876 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:55:40.421354  754876 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.63184639s)
	I1202 20:55:40.421372  754876 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.665382823s)
	I1202 20:55:40.421721  754876 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.475195165s)
	I1202 20:55:40.422960  754876 api_server.go:72] duration metric: took 1.851900644s to wait for apiserver process to appear ...
	I1202 20:55:40.422982  754876 api_server.go:88] waiting for apiserver healthz status ...
	I1202 20:55:40.423009  754876 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1202 20:55:40.424024  754876 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-336331 addons enable metrics-server
	
	I1202 20:55:40.429331  754876 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 20:55:40.429433  754876 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 20:55:40.432246  754876 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1202 20:55:40.433504  754876 addons.go:530] duration metric: took 1.86228656s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	
	
	==> CRI-O <==
	Dec 02 20:55:38 newest-cni-245604 crio[524]: time="2025-12-02T20:55:38.773275972Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 02 20:55:38 newest-cni-245604 crio[524]: time="2025-12-02T20:55:38.774480965Z" level=info msg="Ran pod sandbox 7ad5268592801e12ed9dc5d8cdcd8bca2b98140487f37497a5d1392d898ba64e with infra container: kube-system/kindnet-flbpz/POD" id=07460822-f23e-4d71-b785-fd03a091aaaa name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 02 20:55:38 newest-cni-245604 crio[524]: time="2025-12-02T20:55:38.778700515Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=0595cc5a-1975-41c5-b652-10e3e7cb3959 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 20:55:38 newest-cni-245604 crio[524]: time="2025-12-02T20:55:38.779979988Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=9b42ed16-5112-42a9-861a-e36355d52db0 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 20:55:38 newest-cni-245604 crio[524]: time="2025-12-02T20:55:38.781460816Z" level=info msg="Creating container: kube-system/kindnet-flbpz/kindnet-cni" id=1c7b1b8e-aeb9-4527-b88e-0adf9b062214 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 20:55:38 newest-cni-245604 crio[524]: time="2025-12-02T20:55:38.781575223Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 20:55:38 newest-cni-245604 crio[524]: time="2025-12-02T20:55:38.786259295Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 20:55:38 newest-cni-245604 crio[524]: time="2025-12-02T20:55:38.787038249Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 20:55:38 newest-cni-245604 crio[524]: time="2025-12-02T20:55:38.820523235Z" level=info msg="Created container 5d356b416e3653870f09b95ab59dd41dd02fd4db8c0ee65696f185f05b58a6f0: kube-system/kindnet-flbpz/kindnet-cni" id=1c7b1b8e-aeb9-4527-b88e-0adf9b062214 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 20:55:38 newest-cni-245604 crio[524]: time="2025-12-02T20:55:38.82146938Z" level=info msg="Starting container: 5d356b416e3653870f09b95ab59dd41dd02fd4db8c0ee65696f185f05b58a6f0" id=5d81c02e-a25f-4b1c-b1cf-43183de2c9d0 name=/runtime.v1.RuntimeService/StartContainer
	Dec 02 20:55:38 newest-cni-245604 crio[524]: time="2025-12-02T20:55:38.823595228Z" level=info msg="Started container" PID=1030 containerID=5d356b416e3653870f09b95ab59dd41dd02fd4db8c0ee65696f185f05b58a6f0 description=kube-system/kindnet-flbpz/kindnet-cni id=5d81c02e-a25f-4b1c-b1cf-43183de2c9d0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7ad5268592801e12ed9dc5d8cdcd8bca2b98140487f37497a5d1392d898ba64e
	Dec 02 20:55:39 newest-cni-245604 crio[524]: time="2025-12-02T20:55:39.664577834Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-khm6s/POD" id=ed711b14-2eb0-42f5-9183-43ff4095ac2f name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 02 20:55:39 newest-cni-245604 crio[524]: time="2025-12-02T20:55:39.664635194Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 20:55:39 newest-cni-245604 crio[524]: time="2025-12-02T20:55:39.667664096Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=ed711b14-2eb0-42f5-9183-43ff4095ac2f name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 02 20:55:39 newest-cni-245604 crio[524]: time="2025-12-02T20:55:39.669883311Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 02 20:55:39 newest-cni-245604 crio[524]: time="2025-12-02T20:55:39.670818237Z" level=info msg="Ran pod sandbox 31b866d5fc4400d6511aa61502e1b0199d427d5126bf277d8347ee62c97adcca with infra container: kube-system/kube-proxy-khm6s/POD" id=ed711b14-2eb0-42f5-9183-43ff4095ac2f name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 02 20:55:39 newest-cni-245604 crio[524]: time="2025-12-02T20:55:39.672317018Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=bf105205-3a91-4d5c-9b37-e29b342f8201 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 20:55:39 newest-cni-245604 crio[524]: time="2025-12-02T20:55:39.673584508Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=fbfd8d40-265a-484a-b53c-9f417f9346a2 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 20:55:39 newest-cni-245604 crio[524]: time="2025-12-02T20:55:39.674644957Z" level=info msg="Creating container: kube-system/kube-proxy-khm6s/kube-proxy" id=220e4a4e-7b30-4a40-b3f0-52f4ad53fefe name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 20:55:39 newest-cni-245604 crio[524]: time="2025-12-02T20:55:39.675014741Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 20:55:39 newest-cni-245604 crio[524]: time="2025-12-02T20:55:39.681282364Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 20:55:39 newest-cni-245604 crio[524]: time="2025-12-02T20:55:39.681976863Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 20:55:39 newest-cni-245604 crio[524]: time="2025-12-02T20:55:39.735032204Z" level=info msg="Created container 8af9f47509e18f95362a22d5dbd4df7f0e64f85d8cf23218eed49b6a2fcf50c8: kube-system/kube-proxy-khm6s/kube-proxy" id=220e4a4e-7b30-4a40-b3f0-52f4ad53fefe name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 20:55:39 newest-cni-245604 crio[524]: time="2025-12-02T20:55:39.735831352Z" level=info msg="Starting container: 8af9f47509e18f95362a22d5dbd4df7f0e64f85d8cf23218eed49b6a2fcf50c8" id=7dbb3893-751d-4cf9-86f8-6c2883d35472 name=/runtime.v1.RuntimeService/StartContainer
	Dec 02 20:55:39 newest-cni-245604 crio[524]: time="2025-12-02T20:55:39.739736017Z" level=info msg="Started container" PID=1090 containerID=8af9f47509e18f95362a22d5dbd4df7f0e64f85d8cf23218eed49b6a2fcf50c8 description=kube-system/kube-proxy-khm6s/kube-proxy id=7dbb3893-751d-4cf9-86f8-6c2883d35472 name=/runtime.v1.RuntimeService/StartContainer sandboxID=31b866d5fc4400d6511aa61502e1b0199d427d5126bf277d8347ee62c97adcca
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	8af9f47509e18       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810   3 seconds ago       Running             kube-proxy                1                   31b866d5fc440       kube-proxy-khm6s                            kube-system
	5d356b416e365       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   4 seconds ago       Running             kindnet-cni               1                   7ad5268592801       kindnet-flbpz                               kube-system
	299a73dcc2413       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   6 seconds ago       Running             etcd                      1                   ce9e3900caa8d       etcd-newest-cni-245604                      kube-system
	7f956e3ba93eb       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b   6 seconds ago       Running             kube-apiserver            1                   234c52e8c097f       kube-apiserver-newest-cni-245604            kube-system
	d1dc95faf60a3       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc   6 seconds ago       Running             kube-controller-manager   1                   3a691a385b616       kube-controller-manager-newest-cni-245604   kube-system
	c4e1eb0695344       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46   6 seconds ago       Running             kube-scheduler            1                   85f71e10b3a2e       kube-scheduler-newest-cni-245604            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-245604
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-245604
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c0dec3f81fa1d67ed4ee425e0873d0aa009f9b92
	                    minikube.k8s.io/name=newest-cni-245604
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_02T20_55_18_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 02 Dec 2025 20:55:14 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-245604
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 02 Dec 2025 20:55:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 02 Dec 2025 20:55:37 +0000   Tue, 02 Dec 2025 20:55:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 02 Dec 2025 20:55:37 +0000   Tue, 02 Dec 2025 20:55:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 02 Dec 2025 20:55:37 +0000   Tue, 02 Dec 2025 20:55:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Tue, 02 Dec 2025 20:55:37 +0000   Tue, 02 Dec 2025 20:55:13 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    newest-cni-245604
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 c31a325af81b969158c21fa769271857
	  System UUID:                db92b9bd-a8ee-4a01-993b-03f9f3976205
	  Boot ID:                    9dd5456f-e394-4b4b-9458-48d26faf507e
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-245604                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         26s
	  kube-system                 kindnet-flbpz                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      21s
	  kube-system                 kube-apiserver-newest-cni-245604             250m (3%)     0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-controller-manager-newest-cni-245604    200m (2%)     0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-proxy-khm6s                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         21s
	  kube-system                 kube-scheduler-newest-cni-245604             100m (1%)     0 (0%)      0 (0%)           0 (0%)         26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  22s   node-controller  Node newest-cni-245604 event: Registered Node newest-cni-245604 in Controller
	  Normal  RegisteredNode  2s    node-controller  Node newest-cni-245604 event: Registered Node newest-cni-245604 in Controller
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: a6 a8 6c 2c e4 fb 66 95 0d 9f b9 e6 08 00
	[ +32.254238] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a6 a8 6c 2c e4 fb 66 95 0d 9f b9 e6 08 00
	[Dec 2 20:52] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 62 03 bd 14 45 8a 08 06
	[  +0.000590] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 96 27 ad 0d 40 04 08 06
	[Dec 2 20:53] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 56 d8 4f 6f 7c f7 08 06
	[  +0.000700] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 82 e4 ba c0 78 5f 08 06
	[ +10.119645] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000022] ll header: 00000000: ff ff ff ff ff ff ea 15 d8 88 c9 57 08 06
	[  +2.447166] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 62 df 09 53 d6 6e 08 06
	[  +0.000374] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 62 8d 06 71 0a 5e 08 06
	[Dec 2 20:54] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 12 47 13 50 f6 bc 08 06
	[  +0.001523] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ea 15 d8 88 c9 57 08 06
	[ +22.123549] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 36 0d 45 06 42 2a 08 06
	[  +0.000389] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 56 d8 4f 6f 7c f7 08 06
	
	
	==> etcd [299a73dcc241327fe5cf3f205be0f0fa45b6267d9d291d2b15d27c02c06717cf] <==
	{"level":"warn","ts":"2025-12-02T20:55:37.135000Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39436","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:55:37.145642Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:55:37.155755Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39472","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:55:37.162852Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39494","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:55:37.170401Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:55:37.180260Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39526","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:55:37.187567Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:55:37.194580Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:55:37.201686Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39580","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:55:37.209971Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:55:37.220250Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:55:37.235414Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:55:37.242784Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:55:37.249848Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:55:37.256797Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:55:37.264205Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:55:37.270899Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39716","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:55:37.278723Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:55:37.286566Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39746","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:55:37.298348Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39780","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:55:37.301862Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:55:37.309423Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39812","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:55:37.317515Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39826","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:55:37.324985Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:55:37.377972Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39876","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 20:55:43 up  2:38,  0 user,  load average: 6.04, 4.26, 2.69
	Linux newest-cni-245604 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [5d356b416e3653870f09b95ab59dd41dd02fd4db8c0ee65696f185f05b58a6f0] <==
	I1202 20:55:39.066305       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1202 20:55:39.066625       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1202 20:55:39.066817       1 main.go:148] setting mtu 1500 for CNI 
	I1202 20:55:39.066848       1 main.go:178] kindnetd IP family: "ipv4"
	I1202 20:55:39.066863       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-02T20:55:39Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1202 20:55:39.275684       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1202 20:55:39.275717       1 controller.go:381] "Waiting for informer caches to sync"
	I1202 20:55:39.275731       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1202 20:55:39.275886       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [7f956e3ba93eb4957689173471e1faef57d87fd2d2ec24476026588c56c69ba2] <==
	I1202 20:55:37.896281       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1202 20:55:37.897882       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1202 20:55:37.897012       1 aggregator.go:187] initial CRD sync complete...
	I1202 20:55:37.897962       1 autoregister_controller.go:144] Starting autoregister controller
	I1202 20:55:37.897971       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1202 20:55:37.897978       1 cache.go:39] Caches are synced for autoregister controller
	I1202 20:55:37.897155       1 policy_source.go:248] refreshing policies
	I1202 20:55:37.900909       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1202 20:55:37.906262       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1202 20:55:37.915603       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1202 20:55:37.922869       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1202 20:55:38.013938       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1202 20:55:38.254549       1 controller.go:667] quota admission added evaluator for: namespaces
	I1202 20:55:38.316830       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1202 20:55:38.345057       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1202 20:55:38.354275       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1202 20:55:38.418573       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.255.11"}
	I1202 20:55:38.431195       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.138.229"}
	I1202 20:55:38.797510       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1202 20:55:41.489862       1 controller.go:667] quota admission added evaluator for: endpoints
	I1202 20:55:41.540216       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1202 20:55:41.540225       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1202 20:55:41.591036       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1202 20:55:41.692483       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [d1dc95faf60a35cdf8dd5e3d023890a6a83f6e6ef58c93949a275bea726c4560] <==
	I1202 20:55:41.061745       1 shared_informer.go:377] "Caches are synced"
	I1202 20:55:41.061794       1 shared_informer.go:377] "Caches are synced"
	I1202 20:55:41.061909       1 shared_informer.go:377] "Caches are synced"
	I1202 20:55:41.061995       1 shared_informer.go:377] "Caches are synced"
	I1202 20:55:41.062115       1 shared_informer.go:377] "Caches are synced"
	I1202 20:55:41.062282       1 shared_informer.go:377] "Caches are synced"
	I1202 20:55:41.062366       1 shared_informer.go:377] "Caches are synced"
	I1202 20:55:41.062495       1 shared_informer.go:377] "Caches are synced"
	I1202 20:55:41.062588       1 shared_informer.go:377] "Caches are synced"
	I1202 20:55:41.062022       1 shared_informer.go:377] "Caches are synced"
	I1202 20:55:41.064766       1 shared_informer.go:377] "Caches are synced"
	I1202 20:55:41.069859       1 shared_informer.go:377] "Caches are synced"
	I1202 20:55:41.069929       1 shared_informer.go:377] "Caches are synced"
	I1202 20:55:41.066880       1 shared_informer.go:377] "Caches are synced"
	I1202 20:55:41.066894       1 shared_informer.go:377] "Caches are synced"
	I1202 20:55:41.062322       1 shared_informer.go:377] "Caches are synced"
	I1202 20:55:41.062593       1 shared_informer.go:377] "Caches are synced"
	I1202 20:55:41.070296       1 shared_informer.go:377] "Caches are synced"
	I1202 20:55:41.076253       1 shared_informer.go:377] "Caches are synced"
	I1202 20:55:41.076289       1 shared_informer.go:377] "Caches are synced"
	I1202 20:55:41.076320       1 shared_informer.go:377] "Caches are synced"
	I1202 20:55:41.152174       1 shared_informer.go:377] "Caches are synced"
	I1202 20:55:41.154350       1 shared_informer.go:377] "Caches are synced"
	I1202 20:55:41.154369       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1202 20:55:41.154374       1 garbagecollector.go:169] "Proceeding to collect garbage"
	
	
	==> kube-proxy [8af9f47509e18f95362a22d5dbd4df7f0e64f85d8cf23218eed49b6a2fcf50c8] <==
	I1202 20:55:39.820352       1 server_linux.go:53] "Using iptables proxy"
	I1202 20:55:39.915874       1 shared_informer.go:370] "Waiting for caches to sync"
	I1202 20:55:40.018153       1 shared_informer.go:377] "Caches are synced"
	I1202 20:55:40.018204       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1202 20:55:40.018312       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1202 20:55:40.054515       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1202 20:55:40.054585       1 server_linux.go:136] "Using iptables Proxier"
	I1202 20:55:40.065846       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1202 20:55:40.066331       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1202 20:55:40.066415       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 20:55:40.081700       1 config.go:200] "Starting service config controller"
	I1202 20:55:40.081726       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1202 20:55:40.081753       1 config.go:106] "Starting endpoint slice config controller"
	I1202 20:55:40.081759       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1202 20:55:40.081791       1 config.go:403] "Starting serviceCIDR config controller"
	I1202 20:55:40.081804       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1202 20:55:40.082727       1 config.go:309] "Starting node config controller"
	I1202 20:55:40.082793       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1202 20:55:40.082819       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1202 20:55:40.182803       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1202 20:55:40.182903       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1202 20:55:40.183310       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [c4e1eb06953444823120ccc3fc5298bbaa5c977cbbf41e594e6b162545a4994c] <==
	I1202 20:55:36.983824       1 serving.go:386] Generated self-signed cert in-memory
	W1202 20:55:37.820782       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1202 20:55:37.820847       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1202 20:55:37.820861       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1202 20:55:37.820871       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1202 20:55:37.882977       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-beta.0"
	I1202 20:55:37.883010       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 20:55:37.886551       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1202 20:55:37.886587       1 shared_informer.go:370] "Waiting for caches to sync"
	I1202 20:55:37.886710       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1202 20:55:37.886844       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1202 20:55:37.987293       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 02 20:55:37 newest-cni-245604 kubelet[660]: E1202 20:55:37.975795     660 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-245604\" already exists" pod="kube-system/kube-controller-manager-newest-cni-245604"
	Dec 02 20:55:37 newest-cni-245604 kubelet[660]: I1202 20:55:37.975838     660 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-245604"
	Dec 02 20:55:37 newest-cni-245604 kubelet[660]: E1202 20:55:37.982956     660 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-245604\" already exists" pod="kube-system/kube-scheduler-newest-cni-245604"
	Dec 02 20:55:37 newest-cni-245604 kubelet[660]: I1202 20:55:37.983160     660 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-245604"
	Dec 02 20:55:37 newest-cni-245604 kubelet[660]: E1202 20:55:37.992001     660 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-245604\" already exists" pod="kube-system/etcd-newest-cni-245604"
	Dec 02 20:55:37 newest-cni-245604 kubelet[660]: E1202 20:55:37.993049     660 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-245604\" already exists" pod="kube-system/kube-scheduler-newest-cni-245604"
	Dec 02 20:55:37 newest-cni-245604 kubelet[660]: E1202 20:55:37.993167     660 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-245604" containerName="kube-scheduler"
	Dec 02 20:55:37 newest-cni-245604 kubelet[660]: E1202 20:55:37.993954     660 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-245604\" already exists" pod="kube-system/etcd-newest-cni-245604"
	Dec 02 20:55:37 newest-cni-245604 kubelet[660]: E1202 20:55:37.994049     660 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-245604" containerName="etcd"
	Dec 02 20:55:37 newest-cni-245604 kubelet[660]: E1202 20:55:37.994342     660 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-245604\" already exists" pod="kube-system/kube-apiserver-newest-cni-245604"
	Dec 02 20:55:37 newest-cni-245604 kubelet[660]: E1202 20:55:37.994420     660 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-245604" containerName="kube-apiserver"
	Dec 02 20:55:38 newest-cni-245604 kubelet[660]: I1202 20:55:38.010566     660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/990486ba-3da5-4666-b441-52e3fcc4c81f-xtables-lock\") pod \"kube-proxy-khm6s\" (UID: \"990486ba-3da5-4666-b441-52e3fcc4c81f\") " pod="kube-system/kube-proxy-khm6s"
	Dec 02 20:55:38 newest-cni-245604 kubelet[660]: I1202 20:55:38.010624     660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/990486ba-3da5-4666-b441-52e3fcc4c81f-lib-modules\") pod \"kube-proxy-khm6s\" (UID: \"990486ba-3da5-4666-b441-52e3fcc4c81f\") " pod="kube-system/kube-proxy-khm6s"
	Dec 02 20:55:38 newest-cni-245604 kubelet[660]: I1202 20:55:38.010651     660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/5931b461-203e-4906-9cb7-0a7ddcf9c5ae-cni-cfg\") pod \"kindnet-flbpz\" (UID: \"5931b461-203e-4906-9cb7-0a7ddcf9c5ae\") " pod="kube-system/kindnet-flbpz"
	Dec 02 20:55:38 newest-cni-245604 kubelet[660]: I1202 20:55:38.010700     660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5931b461-203e-4906-9cb7-0a7ddcf9c5ae-xtables-lock\") pod \"kindnet-flbpz\" (UID: \"5931b461-203e-4906-9cb7-0a7ddcf9c5ae\") " pod="kube-system/kindnet-flbpz"
	Dec 02 20:55:38 newest-cni-245604 kubelet[660]: I1202 20:55:38.010730     660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5931b461-203e-4906-9cb7-0a7ddcf9c5ae-lib-modules\") pod \"kindnet-flbpz\" (UID: \"5931b461-203e-4906-9cb7-0a7ddcf9c5ae\") " pod="kube-system/kindnet-flbpz"
	Dec 02 20:55:38 newest-cni-245604 kubelet[660]: E1202 20:55:38.885635     660 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-245604" containerName="kube-scheduler"
	Dec 02 20:55:38 newest-cni-245604 kubelet[660]: E1202 20:55:38.885966     660 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-245604" containerName="kube-apiserver"
	Dec 02 20:55:38 newest-cni-245604 kubelet[660]: E1202 20:55:38.886146     660 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-245604" containerName="etcd"
	Dec 02 20:55:39 newest-cni-245604 kubelet[660]: E1202 20:55:39.012326     660 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
	Dec 02 20:55:39 newest-cni-245604 kubelet[660]: E1202 20:55:39.012463     660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/990486ba-3da5-4666-b441-52e3fcc4c81f-kube-proxy podName:990486ba-3da5-4666-b441-52e3fcc4c81f nodeName:}" failed. No retries permitted until 2025-12-02 20:55:39.512426196 +0000 UTC m=+3.754051965 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/990486ba-3da5-4666-b441-52e3fcc4c81f-kube-proxy") pod "kube-proxy-khm6s" (UID: "990486ba-3da5-4666-b441-52e3fcc4c81f") : failed to sync configmap cache: timed out waiting for the condition
	Dec 02 20:55:40 newest-cni-245604 kubelet[660]: E1202 20:55:40.348687     660 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-245604" containerName="kube-controller-manager"
	Dec 02 20:55:40 newest-cni-245604 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 02 20:55:40 newest-cni-245604 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 02 20:55:40 newest-cni-245604 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-245604 -n newest-cni-245604
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-245604 -n newest-cni-245604: exit status 2 (384.624545ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-245604 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-7d764666f9-blfz2 storage-provisioner dashboard-metrics-scraper-867fb5f87b-vjf2w kubernetes-dashboard-b84665fb8-75cqx
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-245604 describe pod coredns-7d764666f9-blfz2 storage-provisioner dashboard-metrics-scraper-867fb5f87b-vjf2w kubernetes-dashboard-b84665fb8-75cqx
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-245604 describe pod coredns-7d764666f9-blfz2 storage-provisioner dashboard-metrics-scraper-867fb5f87b-vjf2w kubernetes-dashboard-b84665fb8-75cqx: exit status 1 (112.604246ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-7d764666f9-blfz2" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-867fb5f87b-vjf2w" not found
	Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-75cqx" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-245604 describe pod coredns-7d764666f9-blfz2 storage-provisioner dashboard-metrics-scraper-867fb5f87b-vjf2w kubernetes-dashboard-b84665fb8-75cqx: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-245604
helpers_test.go:243: (dbg) docker inspect newest-cni-245604:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ae60842ee29fe46d926c1b83a674aa62b502c7f9b5da358f869a46142c7f9e7c",
	        "Created": "2025-12-02T20:54:52.492393664Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 754376,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-02T20:55:29.610346427Z",
	            "FinishedAt": "2025-12-02T20:55:28.3563002Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/ae60842ee29fe46d926c1b83a674aa62b502c7f9b5da358f869a46142c7f9e7c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ae60842ee29fe46d926c1b83a674aa62b502c7f9b5da358f869a46142c7f9e7c/hostname",
	        "HostsPath": "/var/lib/docker/containers/ae60842ee29fe46d926c1b83a674aa62b502c7f9b5da358f869a46142c7f9e7c/hosts",
	        "LogPath": "/var/lib/docker/containers/ae60842ee29fe46d926c1b83a674aa62b502c7f9b5da358f869a46142c7f9e7c/ae60842ee29fe46d926c1b83a674aa62b502c7f9b5da358f869a46142c7f9e7c-json.log",
	        "Name": "/newest-cni-245604",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-245604:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-245604",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ae60842ee29fe46d926c1b83a674aa62b502c7f9b5da358f869a46142c7f9e7c",
	                "LowerDir": "/var/lib/docker/overlay2/cadb92bade23480fadfbab75eef8dd705d24c3d8c95f9fa3a23707e903f6c6b9-init/diff:/var/lib/docker/overlay2/49bb44e987885c48600d5ae7ad3c81cad82a00f38070c0460882c5746d9fae59/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cadb92bade23480fadfbab75eef8dd705d24c3d8c95f9fa3a23707e903f6c6b9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cadb92bade23480fadfbab75eef8dd705d24c3d8c95f9fa3a23707e903f6c6b9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cadb92bade23480fadfbab75eef8dd705d24c3d8c95f9fa3a23707e903f6c6b9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-245604",
	                "Source": "/var/lib/docker/volumes/newest-cni-245604/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-245604",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-245604",
	                "name.minikube.sigs.k8s.io": "newest-cni-245604",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "435c72dd59758f237188c11cbde9f7313579b3d64d10737dd108ede2a9c1a214",
	            "SandboxKey": "/var/run/docker/netns/435c72dd5975",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33498"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33499"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33502"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33500"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33501"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-245604": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "417e9d972863c61faff7f9557de77252152a1c936456e0c9e3a58022e688fea1",
	                    "EndpointID": "0df195722373fe48708fc4895b89159a2f90618eef2ac2cb4f9a87be5c54f5b8",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "72:f5:78:e1:d6:15",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-245604",
	                        "ae60842ee29f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-245604 -n newest-cni-245604
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-245604 -n newest-cni-245604: exit status 2 (458.177989ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-245604 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p newest-cni-245604 logs -n 25: (1.548461329s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-775392 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                                     │ bridge-775392                │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │ 02 Dec 25 20:54 UTC │
	│ ssh     │ -p bridge-775392 sudo cat /etc/containerd/config.toml                                                                                                                                                                                                │ bridge-775392                │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │ 02 Dec 25 20:54 UTC │
	│ ssh     │ -p bridge-775392 sudo containerd config dump                                                                                                                                                                                                         │ bridge-775392                │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │ 02 Dec 25 20:54 UTC │
	│ ssh     │ -p bridge-775392 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                                  │ bridge-775392                │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │ 02 Dec 25 20:54 UTC │
	│ ssh     │ -p bridge-775392 sudo systemctl cat crio --no-pager                                                                                                                                                                                                  │ bridge-775392                │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │ 02 Dec 25 20:54 UTC │
	│ ssh     │ -p bridge-775392 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                        │ bridge-775392                │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │ 02 Dec 25 20:54 UTC │
	│ ssh     │ -p bridge-775392 sudo crio config                                                                                                                                                                                                                    │ bridge-775392                │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │ 02 Dec 25 20:54 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-992336 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ old-k8s-version-992336       │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │ 02 Dec 25 20:54 UTC │
	│ delete  │ -p bridge-775392                                                                                                                                                                                                                                     │ bridge-775392                │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │ 02 Dec 25 20:54 UTC │
	│ start   │ -p old-k8s-version-992336 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0        │ old-k8s-version-992336       │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │ 02 Dec 25 20:55 UTC │
	│ start   │ -p newest-cni-245604 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-245604            │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │ 02 Dec 25 20:55 UTC │
	│ addons  │ enable metrics-server -p no-preload-336331 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ no-preload-336331            │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │                     │
	│ stop    │ -p no-preload-336331 --alsologtostderr -v=3                                                                                                                                                                                                          │ no-preload-336331            │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:55 UTC │
	│ addons  │ enable metrics-server -p newest-cni-245604 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-245604            │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-997805 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-997805 │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │                     │
	│ stop    │ -p newest-cni-245604 --alsologtostderr -v=3                                                                                                                                                                                                          │ newest-cni-245604            │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:55 UTC │
	│ stop    │ -p default-k8s-diff-port-997805 --alsologtostderr -v=3                                                                                                                                                                                               │ default-k8s-diff-port-997805 │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:55 UTC │
	│ addons  │ enable dashboard -p newest-cni-245604 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ newest-cni-245604            │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:55 UTC │
	│ start   │ -p newest-cni-245604 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-245604            │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:55 UTC │
	│ addons  │ enable dashboard -p no-preload-336331 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ no-preload-336331            │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:55 UTC │
	│ start   │ -p no-preload-336331 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-336331            │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │                     │
	│ image   │ newest-cni-245604 image list --format=json                                                                                                                                                                                                           │ newest-cni-245604            │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:55 UTC │
	│ pause   │ -p newest-cni-245604 --alsologtostderr -v=1                                                                                                                                                                                                          │ newest-cni-245604            │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-997805 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-997805 │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:55 UTC │
	│ start   │ -p default-k8s-diff-port-997805 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-997805 │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴───────
──────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 20:55:43
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 20:55:43.623414  759377 out.go:360] Setting OutFile to fd 1 ...
	I1202 20:55:43.623677  759377 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:55:43.623687  759377 out.go:374] Setting ErrFile to fd 2...
	I1202 20:55:43.623691  759377 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:55:43.623910  759377 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-407427/.minikube/bin
	I1202 20:55:43.624363  759377 out.go:368] Setting JSON to false
	I1202 20:55:43.625673  759377 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":9488,"bootTime":1764699456,"procs":380,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 20:55:43.625734  759377 start.go:143] virtualization: kvm guest
	I1202 20:55:43.627854  759377 out.go:179] * [default-k8s-diff-port-997805] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1202 20:55:43.629469  759377 out.go:179]   - MINIKUBE_LOCATION=21997
	I1202 20:55:43.629508  759377 notify.go:221] Checking for updates...
	I1202 20:55:43.632220  759377 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 20:55:43.633738  759377 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-407427/kubeconfig
	I1202 20:55:43.635277  759377 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-407427/.minikube
	I1202 20:55:43.636653  759377 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1202 20:55:43.638031  759377 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 20:55:43.640026  759377 config.go:182] Loaded profile config "default-k8s-diff-port-997805": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 20:55:43.640770  759377 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 20:55:43.668048  759377 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1202 20:55:43.668262  759377 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 20:55:43.734805  759377 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-02 20:55:43.722105512 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 20:55:43.734912  759377 docker.go:319] overlay module found
	I1202 20:55:43.737925  759377 out.go:179] * Using the docker driver based on existing profile
	I1202 20:55:43.739212  759377 start.go:309] selected driver: docker
	I1202 20:55:43.739235  759377 start.go:927] validating driver "docker" against &{Name:default-k8s-diff-port-997805 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-997805 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 20:55:43.739387  759377 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 20:55:43.740334  759377 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 20:55:43.816404  759377 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-02 20:55:43.803669743 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 20:55:43.816737  759377 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 20:55:43.816771  759377 cni.go:84] Creating CNI manager for ""
	I1202 20:55:43.816830  759377 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 20:55:43.816871  759377 start.go:353] cluster config:
	{Name:default-k8s-diff-port-997805 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-997805 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 20:55:43.818771  759377 out.go:179] * Starting "default-k8s-diff-port-997805" primary control-plane node in "default-k8s-diff-port-997805" cluster
	I1202 20:55:43.823500  759377 cache.go:134] Beginning downloading kic base image for docker with crio
	I1202 20:55:43.824982  759377 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1202 20:55:43.826226  759377 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 20:55:43.826260  759377 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21997-407427/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1202 20:55:43.826287  759377 cache.go:65] Caching tarball of preloaded images
	I1202 20:55:43.826356  759377 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 20:55:43.826379  759377 preload.go:238] Found /home/jenkins/minikube-integration/21997-407427/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1202 20:55:43.826390  759377 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1202 20:55:43.826539  759377 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/default-k8s-diff-port-997805/config.json ...
	I1202 20:55:43.852768  759377 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1202 20:55:43.852797  759377 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1202 20:55:43.852814  759377 cache.go:243] Successfully downloaded all kic artifacts
	I1202 20:55:43.852854  759377 start.go:360] acquireMachinesLock for default-k8s-diff-port-997805: {Name:mk4953f04f07f6e42575999e77b919f864a2c0dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 20:55:43.852940  759377 start.go:364] duration metric: took 58.164µs to acquireMachinesLock for "default-k8s-diff-port-997805"
	I1202 20:55:43.852964  759377 start.go:96] Skipping create...Using existing machine configuration
	I1202 20:55:43.852973  759377 fix.go:54] fixHost starting: 
	I1202 20:55:43.853260  759377 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-997805 --format={{.State.Status}}
	I1202 20:55:43.875706  759377 fix.go:112] recreateIfNeeded on default-k8s-diff-port-997805: state=Stopped err=<nil>
	W1202 20:55:43.875740  759377 fix.go:138] unexpected machine state, will restart: <nil>
	
	
	==> CRI-O <==
	Dec 02 20:55:38 newest-cni-245604 crio[524]: time="2025-12-02T20:55:38.773275972Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 02 20:55:38 newest-cni-245604 crio[524]: time="2025-12-02T20:55:38.774480965Z" level=info msg="Ran pod sandbox 7ad5268592801e12ed9dc5d8cdcd8bca2b98140487f37497a5d1392d898ba64e with infra container: kube-system/kindnet-flbpz/POD" id=07460822-f23e-4d71-b785-fd03a091aaaa name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 02 20:55:38 newest-cni-245604 crio[524]: time="2025-12-02T20:55:38.778700515Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=0595cc5a-1975-41c5-b652-10e3e7cb3959 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 20:55:38 newest-cni-245604 crio[524]: time="2025-12-02T20:55:38.779979988Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=9b42ed16-5112-42a9-861a-e36355d52db0 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 20:55:38 newest-cni-245604 crio[524]: time="2025-12-02T20:55:38.781460816Z" level=info msg="Creating container: kube-system/kindnet-flbpz/kindnet-cni" id=1c7b1b8e-aeb9-4527-b88e-0adf9b062214 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 20:55:38 newest-cni-245604 crio[524]: time="2025-12-02T20:55:38.781575223Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 20:55:38 newest-cni-245604 crio[524]: time="2025-12-02T20:55:38.786259295Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 20:55:38 newest-cni-245604 crio[524]: time="2025-12-02T20:55:38.787038249Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 20:55:38 newest-cni-245604 crio[524]: time="2025-12-02T20:55:38.820523235Z" level=info msg="Created container 5d356b416e3653870f09b95ab59dd41dd02fd4db8c0ee65696f185f05b58a6f0: kube-system/kindnet-flbpz/kindnet-cni" id=1c7b1b8e-aeb9-4527-b88e-0adf9b062214 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 20:55:38 newest-cni-245604 crio[524]: time="2025-12-02T20:55:38.82146938Z" level=info msg="Starting container: 5d356b416e3653870f09b95ab59dd41dd02fd4db8c0ee65696f185f05b58a6f0" id=5d81c02e-a25f-4b1c-b1cf-43183de2c9d0 name=/runtime.v1.RuntimeService/StartContainer
	Dec 02 20:55:38 newest-cni-245604 crio[524]: time="2025-12-02T20:55:38.823595228Z" level=info msg="Started container" PID=1030 containerID=5d356b416e3653870f09b95ab59dd41dd02fd4db8c0ee65696f185f05b58a6f0 description=kube-system/kindnet-flbpz/kindnet-cni id=5d81c02e-a25f-4b1c-b1cf-43183de2c9d0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7ad5268592801e12ed9dc5d8cdcd8bca2b98140487f37497a5d1392d898ba64e
	Dec 02 20:55:39 newest-cni-245604 crio[524]: time="2025-12-02T20:55:39.664577834Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-khm6s/POD" id=ed711b14-2eb0-42f5-9183-43ff4095ac2f name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 02 20:55:39 newest-cni-245604 crio[524]: time="2025-12-02T20:55:39.664635194Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 20:55:39 newest-cni-245604 crio[524]: time="2025-12-02T20:55:39.667664096Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=ed711b14-2eb0-42f5-9183-43ff4095ac2f name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 02 20:55:39 newest-cni-245604 crio[524]: time="2025-12-02T20:55:39.669883311Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 02 20:55:39 newest-cni-245604 crio[524]: time="2025-12-02T20:55:39.670818237Z" level=info msg="Ran pod sandbox 31b866d5fc4400d6511aa61502e1b0199d427d5126bf277d8347ee62c97adcca with infra container: kube-system/kube-proxy-khm6s/POD" id=ed711b14-2eb0-42f5-9183-43ff4095ac2f name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 02 20:55:39 newest-cni-245604 crio[524]: time="2025-12-02T20:55:39.672317018Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=bf105205-3a91-4d5c-9b37-e29b342f8201 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 20:55:39 newest-cni-245604 crio[524]: time="2025-12-02T20:55:39.673584508Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=fbfd8d40-265a-484a-b53c-9f417f9346a2 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 20:55:39 newest-cni-245604 crio[524]: time="2025-12-02T20:55:39.674644957Z" level=info msg="Creating container: kube-system/kube-proxy-khm6s/kube-proxy" id=220e4a4e-7b30-4a40-b3f0-52f4ad53fefe name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 20:55:39 newest-cni-245604 crio[524]: time="2025-12-02T20:55:39.675014741Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 20:55:39 newest-cni-245604 crio[524]: time="2025-12-02T20:55:39.681282364Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 20:55:39 newest-cni-245604 crio[524]: time="2025-12-02T20:55:39.681976863Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 20:55:39 newest-cni-245604 crio[524]: time="2025-12-02T20:55:39.735032204Z" level=info msg="Created container 8af9f47509e18f95362a22d5dbd4df7f0e64f85d8cf23218eed49b6a2fcf50c8: kube-system/kube-proxy-khm6s/kube-proxy" id=220e4a4e-7b30-4a40-b3f0-52f4ad53fefe name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 20:55:39 newest-cni-245604 crio[524]: time="2025-12-02T20:55:39.735831352Z" level=info msg="Starting container: 8af9f47509e18f95362a22d5dbd4df7f0e64f85d8cf23218eed49b6a2fcf50c8" id=7dbb3893-751d-4cf9-86f8-6c2883d35472 name=/runtime.v1.RuntimeService/StartContainer
	Dec 02 20:55:39 newest-cni-245604 crio[524]: time="2025-12-02T20:55:39.739736017Z" level=info msg="Started container" PID=1090 containerID=8af9f47509e18f95362a22d5dbd4df7f0e64f85d8cf23218eed49b6a2fcf50c8 description=kube-system/kube-proxy-khm6s/kube-proxy id=7dbb3893-751d-4cf9-86f8-6c2883d35472 name=/runtime.v1.RuntimeService/StartContainer sandboxID=31b866d5fc4400d6511aa61502e1b0199d427d5126bf277d8347ee62c97adcca
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	8af9f47509e18       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810   5 seconds ago       Running             kube-proxy                1                   31b866d5fc440       kube-proxy-khm6s                            kube-system
	5d356b416e365       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   6 seconds ago       Running             kindnet-cni               1                   7ad5268592801       kindnet-flbpz                               kube-system
	299a73dcc2413       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   8 seconds ago       Running             etcd                      1                   ce9e3900caa8d       etcd-newest-cni-245604                      kube-system
	7f956e3ba93eb       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b   8 seconds ago       Running             kube-apiserver            1                   234c52e8c097f       kube-apiserver-newest-cni-245604            kube-system
	d1dc95faf60a3       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc   8 seconds ago       Running             kube-controller-manager   1                   3a691a385b616       kube-controller-manager-newest-cni-245604   kube-system
	c4e1eb0695344       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46   8 seconds ago       Running             kube-scheduler            1                   85f71e10b3a2e       kube-scheduler-newest-cni-245604            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-245604
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-245604
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c0dec3f81fa1d67ed4ee425e0873d0aa009f9b92
	                    minikube.k8s.io/name=newest-cni-245604
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_02T20_55_18_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 02 Dec 2025 20:55:14 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-245604
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 02 Dec 2025 20:55:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 02 Dec 2025 20:55:37 +0000   Tue, 02 Dec 2025 20:55:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 02 Dec 2025 20:55:37 +0000   Tue, 02 Dec 2025 20:55:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 02 Dec 2025 20:55:37 +0000   Tue, 02 Dec 2025 20:55:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Tue, 02 Dec 2025 20:55:37 +0000   Tue, 02 Dec 2025 20:55:13 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    newest-cni-245604
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 c31a325af81b969158c21fa769271857
	  System UUID:                db92b9bd-a8ee-4a01-993b-03f9f3976205
	  Boot ID:                    9dd5456f-e394-4b4b-9458-48d26faf507e
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-245604                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         28s
	  kube-system                 kindnet-flbpz                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      23s
	  kube-system                 kube-apiserver-newest-cni-245604             250m (3%)     0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-controller-manager-newest-cni-245604    200m (2%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-proxy-khm6s                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	  kube-system                 kube-scheduler-newest-cni-245604             100m (1%)     0 (0%)      0 (0%)           0 (0%)         28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  24s   node-controller  Node newest-cni-245604 event: Registered Node newest-cni-245604 in Controller
	  Normal  RegisteredNode  4s    node-controller  Node newest-cni-245604 event: Registered Node newest-cni-245604 in Controller
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: a6 a8 6c 2c e4 fb 66 95 0d 9f b9 e6 08 00
	[ +32.254238] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a6 a8 6c 2c e4 fb 66 95 0d 9f b9 e6 08 00
	[Dec 2 20:52] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 62 03 bd 14 45 8a 08 06
	[  +0.000590] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 96 27 ad 0d 40 04 08 06
	[Dec 2 20:53] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 56 d8 4f 6f 7c f7 08 06
	[  +0.000700] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 82 e4 ba c0 78 5f 08 06
	[ +10.119645] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000022] ll header: 00000000: ff ff ff ff ff ff ea 15 d8 88 c9 57 08 06
	[  +2.447166] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 62 df 09 53 d6 6e 08 06
	[  +0.000374] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 62 8d 06 71 0a 5e 08 06
	[Dec 2 20:54] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 12 47 13 50 f6 bc 08 06
	[  +0.001523] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ea 15 d8 88 c9 57 08 06
	[ +22.123549] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 36 0d 45 06 42 2a 08 06
	[  +0.000389] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 56 d8 4f 6f 7c f7 08 06
	
	
	==> etcd [299a73dcc241327fe5cf3f205be0f0fa45b6267d9d291d2b15d27c02c06717cf] <==
	{"level":"warn","ts":"2025-12-02T20:55:37.135000Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39436","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:55:37.145642Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:55:37.155755Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39472","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:55:37.162852Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39494","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:55:37.170401Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:55:37.180260Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39526","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:55:37.187567Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:55:37.194580Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:55:37.201686Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39580","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:55:37.209971Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:55:37.220250Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:55:37.235414Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:55:37.242784Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:55:37.249848Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:55:37.256797Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:55:37.264205Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:55:37.270899Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39716","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:55:37.278723Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:55:37.286566Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39746","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:55:37.298348Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39780","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:55:37.301862Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:55:37.309423Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39812","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:55:37.317515Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39826","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:55:37.324985Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:55:37.377972Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39876","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 20:55:45 up  2:38,  0 user,  load average: 5.72, 4.23, 2.69
	Linux newest-cni-245604 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [5d356b416e3653870f09b95ab59dd41dd02fd4db8c0ee65696f185f05b58a6f0] <==
	I1202 20:55:39.066305       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1202 20:55:39.066625       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1202 20:55:39.066817       1 main.go:148] setting mtu 1500 for CNI 
	I1202 20:55:39.066848       1 main.go:178] kindnetd IP family: "ipv4"
	I1202 20:55:39.066863       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-02T20:55:39Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1202 20:55:39.275684       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1202 20:55:39.275717       1 controller.go:381] "Waiting for informer caches to sync"
	I1202 20:55:39.275731       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1202 20:55:39.275886       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [7f956e3ba93eb4957689173471e1faef57d87fd2d2ec24476026588c56c69ba2] <==
	I1202 20:55:37.896281       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1202 20:55:37.897882       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1202 20:55:37.897012       1 aggregator.go:187] initial CRD sync complete...
	I1202 20:55:37.897962       1 autoregister_controller.go:144] Starting autoregister controller
	I1202 20:55:37.897971       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1202 20:55:37.897978       1 cache.go:39] Caches are synced for autoregister controller
	I1202 20:55:37.897155       1 policy_source.go:248] refreshing policies
	I1202 20:55:37.900909       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1202 20:55:37.906262       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1202 20:55:37.915603       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1202 20:55:37.922869       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1202 20:55:38.013938       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1202 20:55:38.254549       1 controller.go:667] quota admission added evaluator for: namespaces
	I1202 20:55:38.316830       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1202 20:55:38.345057       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1202 20:55:38.354275       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1202 20:55:38.418573       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.255.11"}
	I1202 20:55:38.431195       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.138.229"}
	I1202 20:55:38.797510       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1202 20:55:41.489862       1 controller.go:667] quota admission added evaluator for: endpoints
	I1202 20:55:41.540216       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1202 20:55:41.540225       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1202 20:55:41.591036       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1202 20:55:41.692483       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [d1dc95faf60a35cdf8dd5e3d023890a6a83f6e6ef58c93949a275bea726c4560] <==
	I1202 20:55:41.061745       1 shared_informer.go:377] "Caches are synced"
	I1202 20:55:41.061794       1 shared_informer.go:377] "Caches are synced"
	I1202 20:55:41.061909       1 shared_informer.go:377] "Caches are synced"
	I1202 20:55:41.061995       1 shared_informer.go:377] "Caches are synced"
	I1202 20:55:41.062115       1 shared_informer.go:377] "Caches are synced"
	I1202 20:55:41.062282       1 shared_informer.go:377] "Caches are synced"
	I1202 20:55:41.062366       1 shared_informer.go:377] "Caches are synced"
	I1202 20:55:41.062495       1 shared_informer.go:377] "Caches are synced"
	I1202 20:55:41.062588       1 shared_informer.go:377] "Caches are synced"
	I1202 20:55:41.062022       1 shared_informer.go:377] "Caches are synced"
	I1202 20:55:41.064766       1 shared_informer.go:377] "Caches are synced"
	I1202 20:55:41.069859       1 shared_informer.go:377] "Caches are synced"
	I1202 20:55:41.069929       1 shared_informer.go:377] "Caches are synced"
	I1202 20:55:41.066880       1 shared_informer.go:377] "Caches are synced"
	I1202 20:55:41.066894       1 shared_informer.go:377] "Caches are synced"
	I1202 20:55:41.062322       1 shared_informer.go:377] "Caches are synced"
	I1202 20:55:41.062593       1 shared_informer.go:377] "Caches are synced"
	I1202 20:55:41.070296       1 shared_informer.go:377] "Caches are synced"
	I1202 20:55:41.076253       1 shared_informer.go:377] "Caches are synced"
	I1202 20:55:41.076289       1 shared_informer.go:377] "Caches are synced"
	I1202 20:55:41.076320       1 shared_informer.go:377] "Caches are synced"
	I1202 20:55:41.152174       1 shared_informer.go:377] "Caches are synced"
	I1202 20:55:41.154350       1 shared_informer.go:377] "Caches are synced"
	I1202 20:55:41.154369       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1202 20:55:41.154374       1 garbagecollector.go:169] "Proceeding to collect garbage"
	
	
	==> kube-proxy [8af9f47509e18f95362a22d5dbd4df7f0e64f85d8cf23218eed49b6a2fcf50c8] <==
	I1202 20:55:39.820352       1 server_linux.go:53] "Using iptables proxy"
	I1202 20:55:39.915874       1 shared_informer.go:370] "Waiting for caches to sync"
	I1202 20:55:40.018153       1 shared_informer.go:377] "Caches are synced"
	I1202 20:55:40.018204       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1202 20:55:40.018312       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1202 20:55:40.054515       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1202 20:55:40.054585       1 server_linux.go:136] "Using iptables Proxier"
	I1202 20:55:40.065846       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1202 20:55:40.066331       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1202 20:55:40.066415       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 20:55:40.081700       1 config.go:200] "Starting service config controller"
	I1202 20:55:40.081726       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1202 20:55:40.081753       1 config.go:106] "Starting endpoint slice config controller"
	I1202 20:55:40.081759       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1202 20:55:40.081791       1 config.go:403] "Starting serviceCIDR config controller"
	I1202 20:55:40.081804       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1202 20:55:40.082727       1 config.go:309] "Starting node config controller"
	I1202 20:55:40.082793       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1202 20:55:40.082819       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1202 20:55:40.182803       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1202 20:55:40.182903       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1202 20:55:40.183310       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [c4e1eb06953444823120ccc3fc5298bbaa5c977cbbf41e594e6b162545a4994c] <==
	I1202 20:55:36.983824       1 serving.go:386] Generated self-signed cert in-memory
	W1202 20:55:37.820782       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1202 20:55:37.820847       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1202 20:55:37.820861       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1202 20:55:37.820871       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1202 20:55:37.882977       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-beta.0"
	I1202 20:55:37.883010       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 20:55:37.886551       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1202 20:55:37.886587       1 shared_informer.go:370] "Waiting for caches to sync"
	I1202 20:55:37.886710       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1202 20:55:37.886844       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1202 20:55:37.987293       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 02 20:55:37 newest-cni-245604 kubelet[660]: E1202 20:55:37.975795     660 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-245604\" already exists" pod="kube-system/kube-controller-manager-newest-cni-245604"
	Dec 02 20:55:37 newest-cni-245604 kubelet[660]: I1202 20:55:37.975838     660 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-245604"
	Dec 02 20:55:37 newest-cni-245604 kubelet[660]: E1202 20:55:37.982956     660 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-245604\" already exists" pod="kube-system/kube-scheduler-newest-cni-245604"
	Dec 02 20:55:37 newest-cni-245604 kubelet[660]: I1202 20:55:37.983160     660 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-245604"
	Dec 02 20:55:37 newest-cni-245604 kubelet[660]: E1202 20:55:37.992001     660 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-245604\" already exists" pod="kube-system/etcd-newest-cni-245604"
	Dec 02 20:55:37 newest-cni-245604 kubelet[660]: E1202 20:55:37.993049     660 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-245604\" already exists" pod="kube-system/kube-scheduler-newest-cni-245604"
	Dec 02 20:55:37 newest-cni-245604 kubelet[660]: E1202 20:55:37.993167     660 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-245604" containerName="kube-scheduler"
	Dec 02 20:55:37 newest-cni-245604 kubelet[660]: E1202 20:55:37.993954     660 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-245604\" already exists" pod="kube-system/etcd-newest-cni-245604"
	Dec 02 20:55:37 newest-cni-245604 kubelet[660]: E1202 20:55:37.994049     660 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-245604" containerName="etcd"
	Dec 02 20:55:37 newest-cni-245604 kubelet[660]: E1202 20:55:37.994342     660 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-245604\" already exists" pod="kube-system/kube-apiserver-newest-cni-245604"
	Dec 02 20:55:37 newest-cni-245604 kubelet[660]: E1202 20:55:37.994420     660 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-245604" containerName="kube-apiserver"
	Dec 02 20:55:38 newest-cni-245604 kubelet[660]: I1202 20:55:38.010566     660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/990486ba-3da5-4666-b441-52e3fcc4c81f-xtables-lock\") pod \"kube-proxy-khm6s\" (UID: \"990486ba-3da5-4666-b441-52e3fcc4c81f\") " pod="kube-system/kube-proxy-khm6s"
	Dec 02 20:55:38 newest-cni-245604 kubelet[660]: I1202 20:55:38.010624     660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/990486ba-3da5-4666-b441-52e3fcc4c81f-lib-modules\") pod \"kube-proxy-khm6s\" (UID: \"990486ba-3da5-4666-b441-52e3fcc4c81f\") " pod="kube-system/kube-proxy-khm6s"
	Dec 02 20:55:38 newest-cni-245604 kubelet[660]: I1202 20:55:38.010651     660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/5931b461-203e-4906-9cb7-0a7ddcf9c5ae-cni-cfg\") pod \"kindnet-flbpz\" (UID: \"5931b461-203e-4906-9cb7-0a7ddcf9c5ae\") " pod="kube-system/kindnet-flbpz"
	Dec 02 20:55:38 newest-cni-245604 kubelet[660]: I1202 20:55:38.010700     660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5931b461-203e-4906-9cb7-0a7ddcf9c5ae-xtables-lock\") pod \"kindnet-flbpz\" (UID: \"5931b461-203e-4906-9cb7-0a7ddcf9c5ae\") " pod="kube-system/kindnet-flbpz"
	Dec 02 20:55:38 newest-cni-245604 kubelet[660]: I1202 20:55:38.010730     660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5931b461-203e-4906-9cb7-0a7ddcf9c5ae-lib-modules\") pod \"kindnet-flbpz\" (UID: \"5931b461-203e-4906-9cb7-0a7ddcf9c5ae\") " pod="kube-system/kindnet-flbpz"
	Dec 02 20:55:38 newest-cni-245604 kubelet[660]: E1202 20:55:38.885635     660 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-245604" containerName="kube-scheduler"
	Dec 02 20:55:38 newest-cni-245604 kubelet[660]: E1202 20:55:38.885966     660 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-245604" containerName="kube-apiserver"
	Dec 02 20:55:38 newest-cni-245604 kubelet[660]: E1202 20:55:38.886146     660 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-245604" containerName="etcd"
	Dec 02 20:55:39 newest-cni-245604 kubelet[660]: E1202 20:55:39.012326     660 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
	Dec 02 20:55:39 newest-cni-245604 kubelet[660]: E1202 20:55:39.012463     660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/990486ba-3da5-4666-b441-52e3fcc4c81f-kube-proxy podName:990486ba-3da5-4666-b441-52e3fcc4c81f nodeName:}" failed. No retries permitted until 2025-12-02 20:55:39.512426196 +0000 UTC m=+3.754051965 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/990486ba-3da5-4666-b441-52e3fcc4c81f-kube-proxy") pod "kube-proxy-khm6s" (UID: "990486ba-3da5-4666-b441-52e3fcc4c81f") : failed to sync configmap cache: timed out waiting for the condition
	Dec 02 20:55:40 newest-cni-245604 kubelet[660]: E1202 20:55:40.348687     660 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-245604" containerName="kube-controller-manager"
	Dec 02 20:55:40 newest-cni-245604 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 02 20:55:40 newest-cni-245604 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 02 20:55:40 newest-cni-245604 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-245604 -n newest-cni-245604
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-245604 -n newest-cni-245604: exit status 2 (362.11223ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-245604 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-7d764666f9-blfz2 storage-provisioner dashboard-metrics-scraper-867fb5f87b-vjf2w kubernetes-dashboard-b84665fb8-75cqx
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-245604 describe pod coredns-7d764666f9-blfz2 storage-provisioner dashboard-metrics-scraper-867fb5f87b-vjf2w kubernetes-dashboard-b84665fb8-75cqx
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-245604 describe pod coredns-7d764666f9-blfz2 storage-provisioner dashboard-metrics-scraper-867fb5f87b-vjf2w kubernetes-dashboard-b84665fb8-75cqx: exit status 1 (64.180511ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-7d764666f9-blfz2" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-867fb5f87b-vjf2w" not found
	Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-75cqx" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-245604 describe pod coredns-7d764666f9-blfz2 storage-provisioner dashboard-metrics-scraper-867fb5f87b-vjf2w kubernetes-dashboard-b84665fb8-75cqx: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (6.38s)
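The failed pause step can be replayed outside the test harness with the same binaries the harness uses; a rough sketch, reusing only invocations that already appear in this report for the newest-cni-245604 profile (note the profile is deleted later in the run, so this only applies while it still exists):

	# Re-run the failing step with verbose logging:
	out/minikube-linux-amd64 pause -p newest-cni-245604 --alsologtostderr -v=1
	# Post-mortem checks the harness performs afterwards:
	out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-245604 -n newest-cni-245604
	kubectl --context newest-cni-245604 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
	# Full log bundle for filing an issue:
	out/minikube-linux-amd64 logs --file=logs.txt -p newest-cni-245604
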

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (7.27s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-992336 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p old-k8s-version-992336 --alsologtostderr -v=1: exit status 80 (2.334003728s)

                                                
                                                
-- stdout --
	* Pausing node old-k8s-version-992336 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 20:55:51.976286  762738 out.go:360] Setting OutFile to fd 1 ...
	I1202 20:55:51.976675  762738 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:55:51.976693  762738 out.go:374] Setting ErrFile to fd 2...
	I1202 20:55:51.976700  762738 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:55:51.977018  762738 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-407427/.minikube/bin
	I1202 20:55:51.977385  762738 out.go:368] Setting JSON to false
	I1202 20:55:51.977409  762738 mustload.go:66] Loading cluster: old-k8s-version-992336
	I1202 20:55:51.977959  762738 config.go:182] Loaded profile config "old-k8s-version-992336": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1202 20:55:51.978570  762738 cli_runner.go:164] Run: docker container inspect old-k8s-version-992336 --format={{.State.Status}}
	I1202 20:55:52.000806  762738 host.go:66] Checking if "old-k8s-version-992336" exists ...
	I1202 20:55:52.001281  762738 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 20:55:52.070096  762738 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:77 OomKillDisable:false NGoroutines:93 SystemTime:2025-12-02 20:55:52.058240568 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 20:55:52.070935  762738 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-992336 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=
true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1202 20:55:52.073426  762738 out.go:179] * Pausing node old-k8s-version-992336 ... 
	I1202 20:55:52.075569  762738 host.go:66] Checking if "old-k8s-version-992336" exists ...
	I1202 20:55:52.075943  762738 ssh_runner.go:195] Run: systemctl --version
	I1202 20:55:52.076007  762738 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-992336
	I1202 20:55:52.099488  762738 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/old-k8s-version-992336/id_rsa Username:docker}
	I1202 20:55:52.204566  762738 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 20:55:52.227704  762738 pause.go:52] kubelet running: true
	I1202 20:55:52.227961  762738 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1202 20:55:52.489181  762738 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1202 20:55:52.489293  762738 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1202 20:55:52.588301  762738 cri.go:89] found id: "8679dcbceeeaac9a65dd46a7186f9e2f2fffc82bafe92915a3d128519f8498cd"
	I1202 20:55:52.588563  762738 cri.go:89] found id: "a3645c830ae882e91f18ca29697e82834c17cdc1060465378a03c3629aa6ea7f"
	I1202 20:55:52.588584  762738 cri.go:89] found id: "e487bf30c0c3633ded0d035f4b5833104a5b2402f66102aa2b3b20b5d8cc9c68"
	I1202 20:55:52.588624  762738 cri.go:89] found id: "02e2a78839ac1c779849b2870b0581ae1fc0576ba27ee665faee95d4690ff516"
	I1202 20:55:52.588679  762738 cri.go:89] found id: "a8c723adb2c9f209e09041fee9e93fcf992494e43fa7e47890154b25a21288b4"
	I1202 20:55:52.588713  762738 cri.go:89] found id: "b1921b3926c4fba551a94a0ec78b54be832b8754401c93ba491ed82e1b71e6be"
	I1202 20:55:52.588726  762738 cri.go:89] found id: "e1e39d0565d3822bf2f251fdb0e8de5f07938ae3aad30710f3eb435ed8294864"
	I1202 20:55:52.588741  762738 cri.go:89] found id: "b30d0a318021ad78d96505cbec12dab08e463997373813e56adc6e14d585834d"
	I1202 20:55:52.588785  762738 cri.go:89] found id: "670db3462ea1c5beb2d55dfd0859b3df17a3bf33ad117a56693583fcb4ccdd66"
	I1202 20:55:52.588823  762738 cri.go:89] found id: "bf29065d30f2a6e3fbd18c254a02294145f086b26e4171ce8fd09900fd813f1a"
	I1202 20:55:52.588837  762738 cri.go:89] found id: "c6a55f74f0b2c40c941df4d57b1985d9f197f20a64448ec742c7becad69978f4"
	I1202 20:55:52.588879  762738 cri.go:89] found id: ""
	I1202 20:55:52.588955  762738 ssh_runner.go:195] Run: sudo runc list -f json
	I1202 20:55:52.615632  762738 retry.go:31] will retry after 307.696335ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T20:55:52Z" level=error msg="open /run/runc: no such file or directory"
	I1202 20:55:52.924237  762738 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 20:55:52.943737  762738 pause.go:52] kubelet running: false
	I1202 20:55:52.943805  762738 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1202 20:55:53.180886  762738 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1202 20:55:53.181055  762738 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1202 20:55:53.283879  762738 cri.go:89] found id: "8679dcbceeeaac9a65dd46a7186f9e2f2fffc82bafe92915a3d128519f8498cd"
	I1202 20:55:53.283913  762738 cri.go:89] found id: "a3645c830ae882e91f18ca29697e82834c17cdc1060465378a03c3629aa6ea7f"
	I1202 20:55:53.283920  762738 cri.go:89] found id: "e487bf30c0c3633ded0d035f4b5833104a5b2402f66102aa2b3b20b5d8cc9c68"
	I1202 20:55:53.283925  762738 cri.go:89] found id: "02e2a78839ac1c779849b2870b0581ae1fc0576ba27ee665faee95d4690ff516"
	I1202 20:55:53.283929  762738 cri.go:89] found id: "a8c723adb2c9f209e09041fee9e93fcf992494e43fa7e47890154b25a21288b4"
	I1202 20:55:53.283935  762738 cri.go:89] found id: "b1921b3926c4fba551a94a0ec78b54be832b8754401c93ba491ed82e1b71e6be"
	I1202 20:55:53.283939  762738 cri.go:89] found id: "e1e39d0565d3822bf2f251fdb0e8de5f07938ae3aad30710f3eb435ed8294864"
	I1202 20:55:53.283943  762738 cri.go:89] found id: "b30d0a318021ad78d96505cbec12dab08e463997373813e56adc6e14d585834d"
	I1202 20:55:53.283947  762738 cri.go:89] found id: "670db3462ea1c5beb2d55dfd0859b3df17a3bf33ad117a56693583fcb4ccdd66"
	I1202 20:55:53.283958  762738 cri.go:89] found id: "bf29065d30f2a6e3fbd18c254a02294145f086b26e4171ce8fd09900fd813f1a"
	I1202 20:55:53.283962  762738 cri.go:89] found id: "c6a55f74f0b2c40c941df4d57b1985d9f197f20a64448ec742c7becad69978f4"
	I1202 20:55:53.283967  762738 cri.go:89] found id: ""
	I1202 20:55:53.284023  762738 ssh_runner.go:195] Run: sudo runc list -f json
	I1202 20:55:53.300601  762738 retry.go:31] will retry after 360.519466ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T20:55:53Z" level=error msg="open /run/runc: no such file or directory"
	I1202 20:55:53.662302  762738 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 20:55:53.678058  762738 pause.go:52] kubelet running: false
	I1202 20:55:53.678135  762738 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1202 20:55:53.859866  762738 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1202 20:55:53.860030  762738 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1202 20:55:53.946475  762738 cri.go:89] found id: "8679dcbceeeaac9a65dd46a7186f9e2f2fffc82bafe92915a3d128519f8498cd"
	I1202 20:55:53.946509  762738 cri.go:89] found id: "a3645c830ae882e91f18ca29697e82834c17cdc1060465378a03c3629aa6ea7f"
	I1202 20:55:53.946517  762738 cri.go:89] found id: "e487bf30c0c3633ded0d035f4b5833104a5b2402f66102aa2b3b20b5d8cc9c68"
	I1202 20:55:53.946522  762738 cri.go:89] found id: "02e2a78839ac1c779849b2870b0581ae1fc0576ba27ee665faee95d4690ff516"
	I1202 20:55:53.946527  762738 cri.go:89] found id: "a8c723adb2c9f209e09041fee9e93fcf992494e43fa7e47890154b25a21288b4"
	I1202 20:55:53.946532  762738 cri.go:89] found id: "b1921b3926c4fba551a94a0ec78b54be832b8754401c93ba491ed82e1b71e6be"
	I1202 20:55:53.946536  762738 cri.go:89] found id: "e1e39d0565d3822bf2f251fdb0e8de5f07938ae3aad30710f3eb435ed8294864"
	I1202 20:55:53.946541  762738 cri.go:89] found id: "b30d0a318021ad78d96505cbec12dab08e463997373813e56adc6e14d585834d"
	I1202 20:55:53.946546  762738 cri.go:89] found id: "670db3462ea1c5beb2d55dfd0859b3df17a3bf33ad117a56693583fcb4ccdd66"
	I1202 20:55:53.946555  762738 cri.go:89] found id: "bf29065d30f2a6e3fbd18c254a02294145f086b26e4171ce8fd09900fd813f1a"
	I1202 20:55:53.946560  762738 cri.go:89] found id: "c6a55f74f0b2c40c941df4d57b1985d9f197f20a64448ec742c7becad69978f4"
	I1202 20:55:53.946566  762738 cri.go:89] found id: ""
	I1202 20:55:53.946631  762738 ssh_runner.go:195] Run: sudo runc list -f json
	I1202 20:55:54.043455  762738 out.go:203] 
	W1202 20:55:54.069153  762738 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T20:55:53Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T20:55:53Z" level=error msg="open /run/runc: no such file or directory"
	
	W1202 20:55:54.069193  762738 out.go:285] * 
	* 
	W1202 20:55:54.077957  762738 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 20:55:54.149873  762738 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p old-k8s-version-992336 --alsologtostderr -v=1 failed: exit status 80
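The stderr above shows the mechanism: the kubelet is disabled, crictl still lists the kube-system containers, but every `sudo runc list -f json` attempt fails with "open /run/runc: no such file or directory", and once the retries are exhausted the command exits 80 with GUEST_PAUSE. A minimal diagnostic sketch follows, assuming shell access to the node (e.g. `out/minikube-linux-amd64 ssh -p old-k8s-version-992336`) and that CRI-O may be configured with an OCI runtime or state root other than runc's default /run/runc (crun, for instance, keeps its state under /run/crun); the grep pattern and the /run/crun path are illustrative assumptions, not output from this run:

	# Which OCI runtime and state root is CRI-O configured with?
	sudo crio config | grep -n -A5 'crio.runtime'
	# Does the state directory the pause code expects actually exist?
	sudo ls -ld /run/runc /run/crun
	# Containers CRI-O itself reports (same crictl call as in the stderr above):
	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# The exact call that keeps failing:
	sudo runc list -f json
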
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-992336
helpers_test.go:243: (dbg) docker inspect old-k8s-version-992336:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "51827f72c809ddc8017131ab6c734b0218cc31e7e1a61761312a7eb2f62a2f62",
	        "Created": "2025-12-02T20:53:31.91066414Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 743874,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-02T20:54:48.947196331Z",
	            "FinishedAt": "2025-12-02T20:54:47.675219839Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/51827f72c809ddc8017131ab6c734b0218cc31e7e1a61761312a7eb2f62a2f62/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/51827f72c809ddc8017131ab6c734b0218cc31e7e1a61761312a7eb2f62a2f62/hostname",
	        "HostsPath": "/var/lib/docker/containers/51827f72c809ddc8017131ab6c734b0218cc31e7e1a61761312a7eb2f62a2f62/hosts",
	        "LogPath": "/var/lib/docker/containers/51827f72c809ddc8017131ab6c734b0218cc31e7e1a61761312a7eb2f62a2f62/51827f72c809ddc8017131ab6c734b0218cc31e7e1a61761312a7eb2f62a2f62-json.log",
	        "Name": "/old-k8s-version-992336",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-992336:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-992336",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "51827f72c809ddc8017131ab6c734b0218cc31e7e1a61761312a7eb2f62a2f62",
	                "LowerDir": "/var/lib/docker/overlay2/7c0073ae68bbddb0c31d7b4a3575e90065e1d78fb046473d890be499fbc620c1-init/diff:/var/lib/docker/overlay2/49bb44e987885c48600d5ae7ad3c81cad82a00f38070c0460882c5746d9fae59/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7c0073ae68bbddb0c31d7b4a3575e90065e1d78fb046473d890be499fbc620c1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7c0073ae68bbddb0c31d7b4a3575e90065e1d78fb046473d890be499fbc620c1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7c0073ae68bbddb0c31d7b4a3575e90065e1d78fb046473d890be499fbc620c1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-992336",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-992336/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-992336",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-992336",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-992336",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "a32ddb34a7a900412afdc026fb6d83dfc12bb96840f2a3c82e24de2c0302f42e",
	            "SandboxKey": "/var/run/docker/netns/a32ddb34a7a9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33488"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33489"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33492"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33490"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33491"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-992336": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "65ab470fa0e2676960773427a71fe76968e07b9da2ef303b86ef95d30a18b6c4",
	                    "EndpointID": "2f8873c719fdce5045c92ebeb907a3634506a61e6e30f294e387646770ead96c",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "7e:f7:a6:e4:f5:b0",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-992336",
	                        "51827f72c809"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-992336 -n old-k8s-version-992336
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-992336 -n old-k8s-version-992336: exit status 2 (400.152334ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-992336 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-992336 logs -n 25: (1.880217533s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬───────
──────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼───────
──────────────┤
	│ ssh     │ -p bridge-775392 sudo crio config                                                                                                                                                                                                                    │ bridge-775392                │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │ 02 Dec 25 20:54 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-992336 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ old-k8s-version-992336       │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │ 02 Dec 25 20:54 UTC │
	│ delete  │ -p bridge-775392                                                                                                                                                                                                                                     │ bridge-775392                │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │ 02 Dec 25 20:54 UTC │
	│ start   │ -p old-k8s-version-992336 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0        │ old-k8s-version-992336       │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │ 02 Dec 25 20:55 UTC │
	│ start   │ -p newest-cni-245604 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-245604            │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │ 02 Dec 25 20:55 UTC │
	│ addons  │ enable metrics-server -p no-preload-336331 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ no-preload-336331            │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │                     │
	│ stop    │ -p no-preload-336331 --alsologtostderr -v=3                                                                                                                                                                                                          │ no-preload-336331            │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:55 UTC │
	│ addons  │ enable metrics-server -p newest-cni-245604 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-245604            │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-997805 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-997805 │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │                     │
	│ stop    │ -p newest-cni-245604 --alsologtostderr -v=3                                                                                                                                                                                                          │ newest-cni-245604            │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:55 UTC │
	│ stop    │ -p default-k8s-diff-port-997805 --alsologtostderr -v=3                                                                                                                                                                                               │ default-k8s-diff-port-997805 │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:55 UTC │
	│ addons  │ enable dashboard -p newest-cni-245604 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ newest-cni-245604            │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:55 UTC │
	│ start   │ -p newest-cni-245604 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-245604            │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:55 UTC │
	│ addons  │ enable dashboard -p no-preload-336331 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ no-preload-336331            │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:55 UTC │
	│ start   │ -p no-preload-336331 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-336331            │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │                     │
	│ image   │ newest-cni-245604 image list --format=json                                                                                                                                                                                                           │ newest-cni-245604            │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:55 UTC │
	│ pause   │ -p newest-cni-245604 --alsologtostderr -v=1                                                                                                                                                                                                          │ newest-cni-245604            │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-997805 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-997805 │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:55 UTC │
	│ start   │ -p default-k8s-diff-port-997805 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-997805 │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │                     │
	│ delete  │ -p newest-cni-245604                                                                                                                                                                                                                                 │ newest-cni-245604            │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:55 UTC │
	│ delete  │ -p newest-cni-245604                                                                                                                                                                                                                                 │ newest-cni-245604            │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:55 UTC │
	│ delete  │ -p disable-driver-mounts-234978                                                                                                                                                                                                                      │ disable-driver-mounts-234978 │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:55 UTC │
	│ start   │ -p embed-certs-386191 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-386191           │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │                     │
	│ image   │ old-k8s-version-992336 image list --format=json                                                                                                                                                                                                      │ old-k8s-version-992336       │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:55 UTC │
	│ pause   │ -p old-k8s-version-992336 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-992336       │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴───────
──────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 20:55:49
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 20:55:49.973376  761851 out.go:360] Setting OutFile to fd 1 ...
	I1202 20:55:49.973479  761851 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:55:49.973486  761851 out.go:374] Setting ErrFile to fd 2...
	I1202 20:55:49.973492  761851 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:55:49.973784  761851 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-407427/.minikube/bin
	I1202 20:55:49.974402  761851 out.go:368] Setting JSON to false
	I1202 20:55:49.976053  761851 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":9494,"bootTime":1764699456,"procs":379,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 20:55:49.976153  761851 start.go:143] virtualization: kvm guest
	I1202 20:55:49.979903  761851 out.go:179] * [embed-certs-386191] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1202 20:55:49.981563  761851 out.go:179]   - MINIKUBE_LOCATION=21997
	I1202 20:55:49.981711  761851 notify.go:221] Checking for updates...
	I1202 20:55:49.985961  761851 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 20:55:49.989444  761851 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-407427/kubeconfig
	I1202 20:55:49.990856  761851 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-407427/.minikube
	I1202 20:55:49.992198  761851 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1202 20:55:49.994165  761851 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 20:55:49.996734  761851 config.go:182] Loaded profile config "default-k8s-diff-port-997805": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 20:55:49.996944  761851 config.go:182] Loaded profile config "no-preload-336331": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 20:55:49.997173  761851 config.go:182] Loaded profile config "old-k8s-version-992336": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1202 20:55:49.997373  761851 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 20:55:50.033364  761851 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1202 20:55:50.033467  761851 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 20:55:50.114622  761851 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-02 20:55:50.101227741 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 20:55:50.114779  761851 docker.go:319] overlay module found
	I1202 20:55:50.117537  761851 out.go:179] * Using the docker driver based on user configuration
	I1202 20:55:50.119145  761851 start.go:309] selected driver: docker
	I1202 20:55:50.119167  761851 start.go:927] validating driver "docker" against <nil>
	I1202 20:55:50.119183  761851 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 20:55:50.120035  761851 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 20:55:50.211212  761851 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-02 20:55:50.198488456 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 20:55:50.211445  761851 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1202 20:55:50.211790  761851 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 20:55:50.214433  761851 out.go:179] * Using Docker driver with root privileges
	I1202 20:55:50.218243  761851 cni.go:84] Creating CNI manager for ""
	I1202 20:55:50.218353  761851 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 20:55:50.218375  761851 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1202 20:55:50.218508  761851 start.go:353] cluster config:
	{Name:embed-certs-386191 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-386191 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPI
D:0 GPUs: AutoPauseInterval:1m0s}
	I1202 20:55:50.220045  761851 out.go:179] * Starting "embed-certs-386191" primary control-plane node in "embed-certs-386191" cluster
	I1202 20:55:50.221707  761851 cache.go:134] Beginning downloading kic base image for docker with crio
	I1202 20:55:50.223105  761851 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1202 20:55:50.224334  761851 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 20:55:50.224383  761851 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 20:55:50.224379  761851 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21997-407427/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1202 20:55:50.224423  761851 cache.go:65] Caching tarball of preloaded images
	I1202 20:55:50.224531  761851 preload.go:238] Found /home/jenkins/minikube-integration/21997-407427/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1202 20:55:50.224544  761851 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1202 20:55:50.224682  761851 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/config.json ...
	I1202 20:55:50.224706  761851 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/config.json: {Name:mk4df57c1427e88de36c6d265cf4b7b9447ba4a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:55:50.254982  761851 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1202 20:55:50.255008  761851 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1202 20:55:50.255030  761851 cache.go:243] Successfully downloaded all kic artifacts
	I1202 20:55:50.255092  761851 start.go:360] acquireMachinesLock for embed-certs-386191: {Name:mk07b451c8d7193712ed79603183bf03b141f2ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 20:55:50.255209  761851 start.go:364] duration metric: took 90.207µs to acquireMachinesLock for "embed-certs-386191"
	I1202 20:55:50.255244  761851 start.go:93] Provisioning new machine with config: &{Name:embed-certs-386191 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-386191 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 20:55:50.255372  761851 start.go:125] createHost starting for "" (driver="docker")
	W1202 20:55:47.478474  754876 pod_ready.go:104] pod "coredns-7d764666f9-ghxk6" is not "Ready", error: <nil>
	W1202 20:55:49.480219  754876 pod_ready.go:104] pod "coredns-7d764666f9-ghxk6" is not "Ready", error: <nil>
	I1202 20:55:48.658867  759377 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 20:55:48.658893  759377 machine.go:97] duration metric: took 4.363922202s to provisionDockerMachine
	I1202 20:55:48.658908  759377 start.go:293] postStartSetup for "default-k8s-diff-port-997805" (driver="docker")
	I1202 20:55:48.659934  759377 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 20:55:48.660266  759377 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 20:55:48.660319  759377 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-997805
	I1202 20:55:48.684270  759377 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/default-k8s-diff-port-997805/id_rsa Username:docker}
	I1202 20:55:48.800470  759377 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 20:55:48.806594  759377 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 20:55:48.806641  759377 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 20:55:48.806659  759377 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-407427/.minikube/addons for local assets ...
	I1202 20:55:48.806723  759377 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-407427/.minikube/files for local assets ...
	I1202 20:55:48.806832  759377 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-407427/.minikube/files/etc/ssl/certs/4110322.pem -> 4110322.pem in /etc/ssl/certs
	I1202 20:55:48.807095  759377 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 20:55:48.817526  759377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/files/etc/ssl/certs/4110322.pem --> /etc/ssl/certs/4110322.pem (1708 bytes)
	I1202 20:55:48.843728  759377 start.go:296] duration metric: took 183.799228ms for postStartSetup
	I1202 20:55:48.843844  759377 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 20:55:48.843886  759377 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-997805
	I1202 20:55:48.867562  759377 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/default-k8s-diff-port-997805/id_rsa Username:docker}
	I1202 20:55:48.976679  759377 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 20:55:48.983737  759377 fix.go:56] duration metric: took 5.130755935s for fixHost
	I1202 20:55:48.983779  759377 start.go:83] releasing machines lock for "default-k8s-diff-port-997805", held for 5.130814844s
	I1202 20:55:48.983853  759377 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-997805
	I1202 20:55:49.008951  759377 ssh_runner.go:195] Run: cat /version.json
	I1202 20:55:49.009046  759377 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-997805
	I1202 20:55:49.009048  759377 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 20:55:49.009136  759377 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-997805
	I1202 20:55:49.034693  759377 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/default-k8s-diff-port-997805/id_rsa Username:docker}
	I1202 20:55:49.035313  759377 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/default-k8s-diff-port-997805/id_rsa Username:docker}
	I1202 20:55:49.217584  759377 ssh_runner.go:195] Run: systemctl --version
	I1202 20:55:49.226948  759377 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 20:55:49.280525  759377 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 20:55:49.287579  759377 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 20:55:49.287663  759377 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 20:55:49.299593  759377 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1202 20:55:49.299624  759377 start.go:496] detecting cgroup driver to use...
	I1202 20:55:49.299667  759377 detect.go:190] detected "systemd" cgroup driver on host os
	I1202 20:55:49.299717  759377 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 20:55:49.321346  759377 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 20:55:49.340202  759377 docker.go:218] disabling cri-docker service (if available) ...
	I1202 20:55:49.340276  759377 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 20:55:49.364580  759377 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 20:55:49.384570  759377 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 20:55:49.507838  759377 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 20:55:49.636982  759377 docker.go:234] disabling docker service ...
	I1202 20:55:49.637124  759377 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 20:55:49.660429  759377 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 20:55:49.676580  759377 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 20:55:49.805919  759377 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 20:55:49.932552  759377 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 20:55:49.950808  759377 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 20:55:49.973269  759377 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1202 20:55:49.973378  759377 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:55:49.987382  759377 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1202 20:55:49.987446  759377 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:55:50.001518  759377 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:55:50.015622  759377 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:55:50.029383  759377 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 20:55:50.042396  759377 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:55:50.055622  759377 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:55:50.069706  759377 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:55:50.082027  759377 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 20:55:50.093878  759377 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 20:55:50.106172  759377 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 20:55:50.241651  759377 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1202 20:55:51.093615  759377 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 20:55:51.093712  759377 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 20:55:51.098803  759377 start.go:564] Will wait 60s for crictl version
	I1202 20:55:51.098893  759377 ssh_runner.go:195] Run: which crictl
	I1202 20:55:51.103616  759377 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 20:55:51.134275  759377 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1202 20:55:51.134365  759377 ssh_runner.go:195] Run: crio --version
	I1202 20:55:51.176508  759377 ssh_runner.go:195] Run: crio --version
	I1202 20:55:51.212619  759377 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.2 ...
	I1202 20:55:51.213954  759377 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-997805 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 20:55:51.239456  759377 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1202 20:55:51.247008  759377 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 20:55:51.258836  759377 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-997805 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-997805 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 20:55:51.259035  759377 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 20:55:51.259113  759377 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 20:55:51.305184  759377 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 20:55:51.305211  759377 crio.go:433] Images already preloaded, skipping extraction
	I1202 20:55:51.305279  759377 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 20:55:51.336679  759377 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 20:55:51.336721  759377 cache_images.go:86] Images are preloaded, skipping loading
	I1202 20:55:51.336736  759377 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.2 crio true true} ...
	I1202 20:55:51.336850  759377 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-997805 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-997805 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1202 20:55:51.336915  759377 ssh_runner.go:195] Run: crio config
	I1202 20:55:51.395485  759377 cni.go:84] Creating CNI manager for ""
	I1202 20:55:51.395526  759377 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 20:55:51.395553  759377 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1202 20:55:51.395590  759377 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-997805 NodeName:default-k8s-diff-port-997805 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.c
rt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 20:55:51.395786  759377 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-997805"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
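(Editor's note: the block above is the kubeadm configuration minikube generates and copies to /var/tmp/minikube/kubeadm.yaml.new. As a minimal sketch of how one could inspect such a multi-document file, the snippet below splits it on document separators and prints each declared kind; the local file name "kubeadm.yaml" and the use of gopkg.in/yaml.v3 are assumptions for illustration, not part of minikube.)

package main

import (
	"fmt"
	"os"
	"strings"

	"gopkg.in/yaml.v3"
)

// Print the apiVersion and kind of every YAML document in the generated
// kubeadm config (InitConfiguration, ClusterConfiguration,
// KubeletConfiguration, KubeProxyConfiguration).
func main() {
	raw, err := os.ReadFile("kubeadm.yaml") // hypothetical local copy of the file above
	if err != nil {
		panic(err)
	}
	for _, doc := range strings.Split(string(raw), "\n---\n") {
		var meta struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := yaml.Unmarshal([]byte(doc), &meta); err != nil {
			panic(err)
		}
		fmt.Printf("%s %s\n", meta.APIVersion, meta.Kind)
	}
}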
	I1202 20:55:51.395870  759377 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1202 20:55:51.406735  759377 binaries.go:51] Found k8s binaries, skipping transfer
	I1202 20:55:51.406822  759377 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 20:55:51.416228  759377 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1202 20:55:51.430748  759377 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1202 20:55:51.448244  759377 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I1202 20:55:51.463482  759377 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1202 20:55:51.467906  759377 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 20:55:51.480393  759377 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 20:55:51.588830  759377 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 20:55:51.618253  759377 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/default-k8s-diff-port-997805 for IP: 192.168.85.2
	I1202 20:55:51.618282  759377 certs.go:195] generating shared ca certs ...
	I1202 20:55:51.618303  759377 certs.go:227] acquiring lock for ca certs: {Name:mkea11924193dd4dc459e16d70207bde383196df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:55:51.618470  759377 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-407427/.minikube/ca.key
	I1202 20:55:51.618534  759377 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-407427/.minikube/proxy-client-ca.key
	I1202 20:55:51.618547  759377 certs.go:257] generating profile certs ...
	I1202 20:55:51.618661  759377 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/default-k8s-diff-port-997805/client.key
	I1202 20:55:51.618759  759377 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/default-k8s-diff-port-997805/apiserver.key.36ffc693
	I1202 20:55:51.618817  759377 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/default-k8s-diff-port-997805/proxy-client.key
	I1202 20:55:51.618958  759377 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/411032.pem (1338 bytes)
	W1202 20:55:51.619000  759377 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-407427/.minikube/certs/411032_empty.pem, impossibly tiny 0 bytes
	I1202 20:55:51.619010  759377 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca-key.pem (1679 bytes)
	I1202 20:55:51.619043  759377 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca.pem (1082 bytes)
	I1202 20:55:51.619087  759377 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/cert.pem (1123 bytes)
	I1202 20:55:51.619120  759377 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/key.pem (1675 bytes)
	I1202 20:55:51.619173  759377 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/files/etc/ssl/certs/4110322.pem (1708 bytes)
	I1202 20:55:51.619958  759377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 20:55:51.642775  759377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1202 20:55:51.668086  759377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 20:55:51.695111  759377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1202 20:55:51.723055  759377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/default-k8s-diff-port-997805/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1202 20:55:51.757108  759377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/default-k8s-diff-port-997805/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1202 20:55:51.782582  759377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/default-k8s-diff-port-997805/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 20:55:51.803028  759377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/default-k8s-diff-port-997805/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1202 20:55:51.823897  759377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 20:55:51.845621  759377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/certs/411032.pem --> /usr/share/ca-certificates/411032.pem (1338 bytes)
	I1202 20:55:51.866855  759377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/files/etc/ssl/certs/4110322.pem --> /usr/share/ca-certificates/4110322.pem (1708 bytes)
	I1202 20:55:51.890515  759377 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 20:55:51.906355  759377 ssh_runner.go:195] Run: openssl version
	I1202 20:55:51.914259  759377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4110322.pem && ln -fs /usr/share/ca-certificates/4110322.pem /etc/ssl/certs/4110322.pem"
	I1202 20:55:51.925148  759377 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4110322.pem
	I1202 20:55:51.929800  759377 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 20:13 /usr/share/ca-certificates/4110322.pem
	I1202 20:55:51.929869  759377 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4110322.pem
	I1202 20:55:51.972279  759377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4110322.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 20:55:51.983418  759377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 20:55:51.993784  759377 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 20:55:51.999249  759377 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 19:55 /usr/share/ca-certificates/minikubeCA.pem
	I1202 20:55:51.999316  759377 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 20:55:52.049373  759377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 20:55:52.061515  759377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/411032.pem && ln -fs /usr/share/ca-certificates/411032.pem /etc/ssl/certs/411032.pem"
	I1202 20:55:52.072126  759377 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/411032.pem
	I1202 20:55:52.076862  759377 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 20:13 /usr/share/ca-certificates/411032.pem
	I1202 20:55:52.076956  759377 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/411032.pem
	I1202 20:55:52.126642  759377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/411032.pem /etc/ssl/certs/51391683.0"
	I1202 20:55:52.138458  759377 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 20:55:52.143543  759377 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1202 20:55:52.198225  759377 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1202 20:55:52.254754  759377 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1202 20:55:52.319722  759377 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1202 20:55:52.380903  759377 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1202 20:55:52.422910  759377 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
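(Editor's note: the six openssl runs above correspond to `openssl x509 -noout -in <cert> -checkend 86400`, i.e. "does this certificate expire within 24 hours?". A minimal Go sketch of the same check is shown below; the relative path is illustrative, since on the node the certificates live under /var/lib/minikube/certs.)

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires
// within duration d, mirroring `openssl x509 -checkend`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("apiserver-kubelet-client.crt", 24*time.Hour) // path is an assumption
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", soon)
}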
	I1202 20:55:52.483325  759377 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-997805 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-997805 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 20:55:52.483438  759377 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 20:55:52.483499  759377 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 20:55:52.522620  759377 cri.go:89] found id: "25e14e8feafb6c0d6c5261cd5e507b812e39fcb9c7e196408fe69d780ebbcd1d"
	I1202 20:55:52.522651  759377 cri.go:89] found id: "0c7e2844e2dbdbf5b9ffe8bf4e8d07304b64b059e3d4c965c2010c5d8a39c499"
	I1202 20:55:52.522657  759377 cri.go:89] found id: "81b0ec87511a05a7501d98eb27c52f69372a4b30c4ea523db262c140f9b68cd3"
	I1202 20:55:52.522662  759377 cri.go:89] found id: "e13e6c4d6c5da602ac2e1402a7612205c5a0ceffdccf7618da3035e562a7d9d3"
	I1202 20:55:52.522667  759377 cri.go:89] found id: ""
	I1202 20:55:52.522718  759377 ssh_runner.go:195] Run: sudo runc list -f json
	W1202 20:55:52.539274  759377 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T20:55:52Z" level=error msg="open /run/runc: no such file or directory"
	I1202 20:55:52.539358  759377 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 20:55:52.550759  759377 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1202 20:55:52.550911  759377 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1202 20:55:52.550977  759377 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1202 20:55:52.562444  759377 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1202 20:55:52.563380  759377 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-997805" does not appear in /home/jenkins/minikube-integration/21997-407427/kubeconfig
	I1202 20:55:52.563867  759377 kubeconfig.go:62] /home/jenkins/minikube-integration/21997-407427/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-997805" cluster setting kubeconfig missing "default-k8s-diff-port-997805" context setting]
	I1202 20:55:52.564708  759377 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-407427/kubeconfig: {Name:mk71769b01652992a8aaa239aae4f5c38b3500e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:55:52.567122  759377 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1202 20:55:52.580423  759377 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1202 20:55:52.580475  759377 kubeadm.go:602] duration metric: took 29.545337ms to restartPrimaryControlPlane
	I1202 20:55:52.580492  759377 kubeadm.go:403] duration metric: took 97.179033ms to StartCluster
	I1202 20:55:52.580515  759377 settings.go:142] acquiring lock: {Name:mk79858205e0c110b31d2645b49097e349ff37c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:55:52.580624  759377 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-407427/kubeconfig
	I1202 20:55:52.582395  759377 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-407427/kubeconfig: {Name:mk71769b01652992a8aaa239aae4f5c38b3500e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:55:52.582737  759377 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 20:55:52.582982  759377 config.go:182] Loaded profile config "default-k8s-diff-port-997805": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 20:55:52.583044  759377 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1202 20:55:52.583145  759377 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-997805"
	I1202 20:55:52.583167  759377 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-997805"
	W1202 20:55:52.583180  759377 addons.go:248] addon storage-provisioner should already be in state true
	I1202 20:55:52.583208  759377 host.go:66] Checking if "default-k8s-diff-port-997805" exists ...
	I1202 20:55:52.583706  759377 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-997805 --format={{.State.Status}}
	I1202 20:55:52.583924  759377 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-997805"
	I1202 20:55:52.583949  759377 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-997805"
	W1202 20:55:52.583958  759377 addons.go:248] addon dashboard should already be in state true
	I1202 20:55:52.583987  759377 host.go:66] Checking if "default-k8s-diff-port-997805" exists ...
	I1202 20:55:52.584470  759377 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-997805 --format={{.State.Status}}
	I1202 20:55:52.584621  759377 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-997805"
	I1202 20:55:52.584638  759377 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-997805"
	I1202 20:55:52.584909  759377 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-997805 --format={{.State.Status}}
	I1202 20:55:52.590138  759377 out.go:179] * Verifying Kubernetes components...
	I1202 20:55:52.591985  759377 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 20:55:52.621520  759377 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-997805"
	W1202 20:55:52.621550  759377 addons.go:248] addon default-storageclass should already be in state true
	I1202 20:55:52.621581  759377 host.go:66] Checking if "default-k8s-diff-port-997805" exists ...
	I1202 20:55:52.621962  759377 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1202 20:55:52.621973  759377 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 20:55:52.622100  759377 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-997805 --format={{.State.Status}}
	I1202 20:55:52.623522  759377 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 20:55:52.623542  759377 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1202 20:55:52.623861  759377 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-997805
	I1202 20:55:52.629794  759377 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1202 20:55:52.631326  759377 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1202 20:55:52.631354  759377 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1202 20:55:52.631441  759377 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-997805
	I1202 20:55:52.650454  759377 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1202 20:55:52.650440  759377 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/default-k8s-diff-port-997805/id_rsa Username:docker}
	I1202 20:55:52.650477  759377 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1202 20:55:52.650539  759377 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-997805
	I1202 20:55:52.664697  759377 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/default-k8s-diff-port-997805/id_rsa Username:docker}
	I1202 20:55:52.687593  759377 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/default-k8s-diff-port-997805/id_rsa Username:docker}
	I1202 20:55:52.782783  759377 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 20:55:52.788136  759377 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 20:55:52.796186  759377 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1202 20:55:52.796227  759377 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1202 20:55:52.805245  759377 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-997805" to be "Ready" ...
	I1202 20:55:52.813493  759377 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1202 20:55:52.816061  759377 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1202 20:55:52.816120  759377 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1202 20:55:52.836609  759377 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1202 20:55:52.836641  759377 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1202 20:55:52.858664  759377 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1202 20:55:52.858695  759377 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1202 20:55:52.881817  759377 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1202 20:55:52.881850  759377 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1202 20:55:52.898249  759377 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1202 20:55:52.898282  759377 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1202 20:55:52.916317  759377 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1202 20:55:52.916341  759377 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1202 20:55:52.934311  759377 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1202 20:55:52.934421  759377 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1202 20:55:52.954130  759377 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1202 20:55:52.954156  759377 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1202 20:55:52.971994  759377 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1202 20:55:50.259730  761851 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1202 20:55:50.260957  761851 start.go:159] libmachine.API.Create for "embed-certs-386191" (driver="docker")
	I1202 20:55:50.261018  761851 client.go:173] LocalClient.Create starting
	I1202 20:55:50.261131  761851 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca.pem
	I1202 20:55:50.261175  761851 main.go:143] libmachine: Decoding PEM data...
	I1202 20:55:50.261199  761851 main.go:143] libmachine: Parsing certificate...
	I1202 20:55:50.261293  761851 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21997-407427/.minikube/certs/cert.pem
	I1202 20:55:50.261321  761851 main.go:143] libmachine: Decoding PEM data...
	I1202 20:55:50.261336  761851 main.go:143] libmachine: Parsing certificate...
	I1202 20:55:50.261828  761851 cli_runner.go:164] Run: docker network inspect embed-certs-386191 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1202 20:55:50.287353  761851 cli_runner.go:211] docker network inspect embed-certs-386191 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1202 20:55:50.287436  761851 network_create.go:284] running [docker network inspect embed-certs-386191] to gather additional debugging logs...
	I1202 20:55:50.287467  761851 cli_runner.go:164] Run: docker network inspect embed-certs-386191
	W1202 20:55:50.313420  761851 cli_runner.go:211] docker network inspect embed-certs-386191 returned with exit code 1
	I1202 20:55:50.313458  761851 network_create.go:287] error running [docker network inspect embed-certs-386191]: docker network inspect embed-certs-386191: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-386191 not found
	I1202 20:55:50.313493  761851 network_create.go:289] output of [docker network inspect embed-certs-386191]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-386191 not found
	
	** /stderr **
	I1202 20:55:50.313695  761851 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 20:55:50.339597  761851 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-acf081edf266 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:42:04:c0:60:47:62} reservation:<nil>}
	I1202 20:55:50.340759  761851 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-9623a21fb225 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:6a:fc:8b:40:15:1b} reservation:<nil>}
	I1202 20:55:50.341559  761851 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-f2b79e7e26a5 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:0e:c7:f4:38:1c:32} reservation:<nil>}
	I1202 20:55:50.342581  761851 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-be4fb772701b IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:22:87:5f:38:96:b7} reservation:<nil>}
	I1202 20:55:50.343861  761851 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-13fe483902b9 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:a2:a4:21:b2:62:5a} reservation:<nil>}
	I1202 20:55:50.344785  761851 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-65ab470fa0e2 IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:16:23:28:7c:c5:24} reservation:<nil>}
	I1202 20:55:50.346012  761851 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001fb66a0}
	I1202 20:55:50.346044  761851 network_create.go:124] attempt to create docker network embed-certs-386191 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1202 20:55:50.346142  761851 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-386191 embed-certs-386191
	I1202 20:55:50.449757  761851 network_create.go:108] docker network embed-certs-386191 192.168.103.0/24 created
	I1202 20:55:50.449812  761851 kic.go:121] calculated static IP "192.168.103.2" for the "embed-certs-386191" container
	I1202 20:55:50.449912  761851 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1202 20:55:50.476319  761851 cli_runner.go:164] Run: docker volume create embed-certs-386191 --label name.minikube.sigs.k8s.io=embed-certs-386191 --label created_by.minikube.sigs.k8s.io=true
	I1202 20:55:50.544287  761851 oci.go:103] Successfully created a docker volume embed-certs-386191
	I1202 20:55:50.544384  761851 cli_runner.go:164] Run: docker run --rm --name embed-certs-386191-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-386191 --entrypoint /usr/bin/test -v embed-certs-386191:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib
	I1202 20:55:51.390297  761851 oci.go:107] Successfully prepared a docker volume embed-certs-386191
	I1202 20:55:51.390398  761851 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 20:55:51.390416  761851 kic.go:194] Starting extracting preloaded images to volume ...
	I1202 20:55:51.390490  761851 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21997-407427/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-386191:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -I lz4 -xf /preloaded.tar -C /extractDir
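(Editor's note: the network_create lines above show minikube skipping the /24 subnets already claimed by existing docker bridges and settling on 192.168.103.0/24. The sketch below is a hypothetical re-creation of that probing, not minikube's actual network.go logic, using the taken subnets from this log.)

package main

import (
	"fmt"
	"net"
)

// freeSubnet steps through 192.168.x.0/24 candidates (49, 58, 67, ... as in the
// log above) and returns the first one not already used by an existing bridge.
func freeSubnet(taken []string) (*net.IPNet, error) {
	used := map[string]bool{}
	for _, t := range taken {
		used[t] = true
	}
	for third := 49; third <= 254; third += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", third)
		if used[cidr] {
			continue
		}
		_, subnet, err := net.ParseCIDR(cidr)
		if err != nil {
			return nil, err
		}
		return subnet, nil
	}
	return nil, fmt.Errorf("no free subnet found")
}

func main() {
	taken := []string{
		"192.168.49.0/24", "192.168.58.0/24", "192.168.67.0/24",
		"192.168.76.0/24", "192.168.85.0/24", "192.168.94.0/24",
	}
	s, err := freeSubnet(taken)
	if err != nil {
		panic(err)
	}
	fmt.Println(s) // 192.168.103.0/24, matching the subnet chosen in the log
}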
	
	
	==> CRI-O <==
	Dec 02 20:55:19 old-k8s-version-992336 crio[567]: time="2025-12-02T20:55:19.086993729Z" level=info msg="Started container" PID=1729 containerID=7d39d0d64f96064ac67f49d7b291ffc6a723235728102accde7c1367e964cd5e description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-jns97/dashboard-metrics-scraper id=dfa3ce6d-a0b3-4f3a-92ac-738e1a066bb7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=24705c97e6c39f3ff4d1d03505e9797f29fdafe19cf88d85eeb62f7fb58e596d
	Dec 02 20:55:20 old-k8s-version-992336 crio[567]: time="2025-12-02T20:55:20.046480414Z" level=info msg="Removing container: c762d3dee3ddbc6677eac7a72488f6df925fbf49ff834d86b05f612d395c131f" id=83795fcf-7033-4a1e-a226-43425c890dfc name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 02 20:55:20 old-k8s-version-992336 crio[567]: time="2025-12-02T20:55:20.06019327Z" level=info msg="Removed container c762d3dee3ddbc6677eac7a72488f6df925fbf49ff834d86b05f612d395c131f: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-jns97/dashboard-metrics-scraper" id=83795fcf-7033-4a1e-a226-43425c890dfc name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 02 20:55:31 old-k8s-version-992336 crio[567]: time="2025-12-02T20:55:31.073959063Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=f658b202-219e-4577-b41a-0c0041256cf2 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 20:55:31 old-k8s-version-992336 crio[567]: time="2025-12-02T20:55:31.075107174Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=531641bd-b5ab-479e-beab-9d7fd652f0f9 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 20:55:31 old-k8s-version-992336 crio[567]: time="2025-12-02T20:55:31.076194618Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=230a7d83-5dcb-4ac9-a680-c52bd33c3cba name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 20:55:31 old-k8s-version-992336 crio[567]: time="2025-12-02T20:55:31.076318733Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 20:55:31 old-k8s-version-992336 crio[567]: time="2025-12-02T20:55:31.0805507Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 20:55:31 old-k8s-version-992336 crio[567]: time="2025-12-02T20:55:31.080697597Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/bf0e32b250acb23a0a2d3be066c74c8a8b6d614fe7ef4bf25bc12e1935332df0/merged/etc/passwd: no such file or directory"
	Dec 02 20:55:31 old-k8s-version-992336 crio[567]: time="2025-12-02T20:55:31.08071913Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/bf0e32b250acb23a0a2d3be066c74c8a8b6d614fe7ef4bf25bc12e1935332df0/merged/etc/group: no such file or directory"
	Dec 02 20:55:31 old-k8s-version-992336 crio[567]: time="2025-12-02T20:55:31.0809277Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 20:55:31 old-k8s-version-992336 crio[567]: time="2025-12-02T20:55:31.10723906Z" level=info msg="Created container 8679dcbceeeaac9a65dd46a7186f9e2f2fffc82bafe92915a3d128519f8498cd: kube-system/storage-provisioner/storage-provisioner" id=230a7d83-5dcb-4ac9-a680-c52bd33c3cba name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 20:55:31 old-k8s-version-992336 crio[567]: time="2025-12-02T20:55:31.107909095Z" level=info msg="Starting container: 8679dcbceeeaac9a65dd46a7186f9e2f2fffc82bafe92915a3d128519f8498cd" id=af9f671b-a609-48d4-8325-49ba54ceb753 name=/runtime.v1.RuntimeService/StartContainer
	Dec 02 20:55:31 old-k8s-version-992336 crio[567]: time="2025-12-02T20:55:31.109792084Z" level=info msg="Started container" PID=1743 containerID=8679dcbceeeaac9a65dd46a7186f9e2f2fffc82bafe92915a3d128519f8498cd description=kube-system/storage-provisioner/storage-provisioner id=af9f671b-a609-48d4-8325-49ba54ceb753 name=/runtime.v1.RuntimeService/StartContainer sandboxID=861e16b68f1a92ea9f025a3342a4d215364187c0293ce4f9c2f00c075b15b465
	Dec 02 20:55:33 old-k8s-version-992336 crio[567]: time="2025-12-02T20:55:33.895913271Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=dd417e9e-c822-4298-afd0-3bb980bc9fe7 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 20:55:33 old-k8s-version-992336 crio[567]: time="2025-12-02T20:55:33.896897842Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=64706d65-529c-4898-b37b-778ac915c582 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 20:55:33 old-k8s-version-992336 crio[567]: time="2025-12-02T20:55:33.897942313Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-jns97/dashboard-metrics-scraper" id=62e78e8b-25d7-469b-b594-2b8805c77aaa name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 20:55:33 old-k8s-version-992336 crio[567]: time="2025-12-02T20:55:33.898149573Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 20:55:33 old-k8s-version-992336 crio[567]: time="2025-12-02T20:55:33.903941275Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 20:55:33 old-k8s-version-992336 crio[567]: time="2025-12-02T20:55:33.904443849Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 20:55:33 old-k8s-version-992336 crio[567]: time="2025-12-02T20:55:33.933280446Z" level=info msg="Created container bf29065d30f2a6e3fbd18c254a02294145f086b26e4171ce8fd09900fd813f1a: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-jns97/dashboard-metrics-scraper" id=62e78e8b-25d7-469b-b594-2b8805c77aaa name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 20:55:33 old-k8s-version-992336 crio[567]: time="2025-12-02T20:55:33.934006401Z" level=info msg="Starting container: bf29065d30f2a6e3fbd18c254a02294145f086b26e4171ce8fd09900fd813f1a" id=58f51271-c0f2-4226-83a6-c60dd5eb3c1e name=/runtime.v1.RuntimeService/StartContainer
	Dec 02 20:55:33 old-k8s-version-992336 crio[567]: time="2025-12-02T20:55:33.9359624Z" level=info msg="Started container" PID=1759 containerID=bf29065d30f2a6e3fbd18c254a02294145f086b26e4171ce8fd09900fd813f1a description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-jns97/dashboard-metrics-scraper id=58f51271-c0f2-4226-83a6-c60dd5eb3c1e name=/runtime.v1.RuntimeService/StartContainer sandboxID=24705c97e6c39f3ff4d1d03505e9797f29fdafe19cf88d85eeb62f7fb58e596d
	Dec 02 20:55:34 old-k8s-version-992336 crio[567]: time="2025-12-02T20:55:34.085786373Z" level=info msg="Removing container: 7d39d0d64f96064ac67f49d7b291ffc6a723235728102accde7c1367e964cd5e" id=7ded94f2-bd1a-405a-bd4e-877d7197a587 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 02 20:55:34 old-k8s-version-992336 crio[567]: time="2025-12-02T20:55:34.095465194Z" level=info msg="Removed container 7d39d0d64f96064ac67f49d7b291ffc6a723235728102accde7c1367e964cd5e: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-jns97/dashboard-metrics-scraper" id=7ded94f2-bd1a-405a-bd4e-877d7197a587 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	bf29065d30f2a       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           22 seconds ago      Exited              dashboard-metrics-scraper   2                   24705c97e6c39       dashboard-metrics-scraper-5f989dc9cf-jns97       kubernetes-dashboard
	8679dcbceeeaa       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           24 seconds ago      Running             storage-provisioner         1                   861e16b68f1a9       storage-provisioner                              kube-system
	c6a55f74f0b2c       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   39 seconds ago      Running             kubernetes-dashboard        0                   d23e0cd2b02ca       kubernetes-dashboard-8694d4445c-kjcfm            kubernetes-dashboard
	a3645c830ae88       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           55 seconds ago      Running             coredns                     0                   18f5d9221f542       coredns-5dd5756b68-ptzsf                         kube-system
	e55eee3e8fc34       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           55 seconds ago      Running             busybox                     1                   a855f6a46899f       busybox                                          default
	e487bf30c0c36       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           55 seconds ago      Exited              storage-provisioner         0                   861e16b68f1a9       storage-provisioner                              kube-system
	02e2a78839ac1       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           55 seconds ago      Running             kube-proxy                  0                   3db0ce9c23af4       kube-proxy-qpzt8                                 kube-system
	a8c723adb2c9f       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           55 seconds ago      Running             kindnet-cni                 0                   f8b2468643e5a       kindnet-jvmsp                                    kube-system
	b1921b3926c4f       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           59 seconds ago      Running             kube-controller-manager     0                   b1ab5cad3e79d       kube-controller-manager-old-k8s-version-992336   kube-system
	e1e39d0565d38       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           59 seconds ago      Running             etcd                        0                   e33d85da64f19       etcd-old-k8s-version-992336                      kube-system
	b30d0a318021a       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           59 seconds ago      Running             kube-apiserver              0                   883565e7c9c58       kube-apiserver-old-k8s-version-992336            kube-system
	670db3462ea1c       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           59 seconds ago      Running             kube-scheduler              0                   0491a5f09f3d3       kube-scheduler-old-k8s-version-992336            kube-system
	
	
	==> coredns [a3645c830ae882e91f18ca29697e82834c17cdc1060465378a03c3629aa6ea7f] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 4c7f44b73086be760ec9e64204f63c5cc5a952c8c1c55ba0b41d8fc3315ce3c7d0259d04847cb8b4561043d4549603f3bccfd9b397eeb814eef159d244d26f39
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:34935 - 29950 "HINFO IN 5887536420643288492.1286128556610634739. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.045699135s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               old-k8s-version-992336
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-992336
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c0dec3f81fa1d67ed4ee425e0873d0aa009f9b92
	                    minikube.k8s.io/name=old-k8s-version-992336
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_02T20_53_49_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 02 Dec 2025 20:53:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-992336
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 02 Dec 2025 20:55:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 02 Dec 2025 20:55:29 +0000   Tue, 02 Dec 2025 20:53:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 02 Dec 2025 20:55:29 +0000   Tue, 02 Dec 2025 20:53:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 02 Dec 2025 20:55:29 +0000   Tue, 02 Dec 2025 20:53:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 02 Dec 2025 20:55:29 +0000   Tue, 02 Dec 2025 20:54:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    old-k8s-version-992336
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 c31a325af81b969158c21fa769271857
	  System UUID:                8d62aba3-5101-4346-987f-a9a614755c7a
	  Boot ID:                    9dd5456f-e394-4b4b-9458-48d26faf507e
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 coredns-5dd5756b68-ptzsf                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     115s
	  kube-system                 etcd-old-k8s-version-992336                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m7s
	  kube-system                 kindnet-jvmsp                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      115s
	  kube-system                 kube-apiserver-old-k8s-version-992336             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m9s
	  kube-system                 kube-controller-manager-old-k8s-version-992336    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m9s
	  kube-system                 kube-proxy-qpzt8                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-scheduler-old-k8s-version-992336             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m7s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         115s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-jns97        0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-kjcfm             0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 113s               kube-proxy       
	  Normal  Starting                 55s                kube-proxy       
	  Normal  NodeHasSufficientMemory  2m7s               kubelet          Node old-k8s-version-992336 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m7s               kubelet          Node old-k8s-version-992336 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m7s               kubelet          Node old-k8s-version-992336 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m7s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           115s               node-controller  Node old-k8s-version-992336 event: Registered Node old-k8s-version-992336 in Controller
	  Normal  NodeReady                101s               kubelet          Node old-k8s-version-992336 status is now: NodeReady
	  Normal  Starting                 61s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  61s (x8 over 61s)  kubelet          Node old-k8s-version-992336 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    61s (x8 over 61s)  kubelet          Node old-k8s-version-992336 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     61s (x8 over 61s)  kubelet          Node old-k8s-version-992336 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           45s                node-controller  Node old-k8s-version-992336 event: Registered Node old-k8s-version-992336 in Controller
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: a6 a8 6c 2c e4 fb 66 95 0d 9f b9 e6 08 00
	[ +32.254238] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a6 a8 6c 2c e4 fb 66 95 0d 9f b9 e6 08 00
	[Dec 2 20:52] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 62 03 bd 14 45 8a 08 06
	[  +0.000590] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 96 27 ad 0d 40 04 08 06
	[Dec 2 20:53] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 56 d8 4f 6f 7c f7 08 06
	[  +0.000700] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 82 e4 ba c0 78 5f 08 06
	[ +10.119645] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000022] ll header: 00000000: ff ff ff ff ff ff ea 15 d8 88 c9 57 08 06
	[  +2.447166] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 62 df 09 53 d6 6e 08 06
	[  +0.000374] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 62 8d 06 71 0a 5e 08 06
	[Dec 2 20:54] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 12 47 13 50 f6 bc 08 06
	[  +0.001523] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ea 15 d8 88 c9 57 08 06
	[ +22.123549] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 36 0d 45 06 42 2a 08 06
	[  +0.000389] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 56 d8 4f 6f 7c f7 08 06
	
	
	==> etcd [e1e39d0565d3822bf2f251fdb0e8de5f07938ae3aad30710f3eb435ed8294864] <==
	{"level":"info","ts":"2025-12-02T20:54:56.485407Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-02T20:54:56.485924Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 switched to configuration voters=(16125559238023404339)"}
	{"level":"info","ts":"2025-12-02T20:54:56.487113Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","added-peer-id":"dfc97eb0aae75b33","added-peer-peer-urls":["https://192.168.94.2:2380"]}
	{"level":"info","ts":"2025-12-02T20:54:56.487528Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-02T20:54:56.487622Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-02T20:54:56.489261Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-12-02T20:54:56.4895Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"dfc97eb0aae75b33","initial-advertise-peer-urls":["https://192.168.94.2:2380"],"listen-peer-urls":["https://192.168.94.2:2380"],"advertise-client-urls":["https://192.168.94.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.94.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-02T20:54:56.489532Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-02T20:54:56.489582Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2025-12-02T20:54:56.489595Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2025-12-02T20:54:57.977893Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-02T20:54:57.977966Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-02T20:54:57.977995Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgPreVoteResp from dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2025-12-02T20:54:57.978011Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became candidate at term 3"}
	{"level":"info","ts":"2025-12-02T20:54:57.978019Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgVoteResp from dfc97eb0aae75b33 at term 3"}
	{"level":"info","ts":"2025-12-02T20:54:57.978029Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became leader at term 3"}
	{"level":"info","ts":"2025-12-02T20:54:57.978037Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: dfc97eb0aae75b33 elected leader dfc97eb0aae75b33 at term 3"}
	{"level":"info","ts":"2025-12-02T20:54:57.97918Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-02T20:54:57.979201Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"dfc97eb0aae75b33","local-member-attributes":"{Name:old-k8s-version-992336 ClientURLs:[https://192.168.94.2:2379]}","request-path":"/0/members/dfc97eb0aae75b33/attributes","cluster-id":"da400bbece288f5a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-02T20:54:57.979207Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-02T20:54:57.97945Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-02T20:54:57.979486Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-02T20:54:57.980638Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.94.2:2379"}
	{"level":"info","ts":"2025-12-02T20:54:57.980645Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-02T20:55:12.271547Z","caller":"traceutil/trace.go:171","msg":"trace[411758583] transaction","detail":"{read_only:false; response_revision:562; number_of_response:1; }","duration":"117.720508ms","start":"2025-12-02T20:55:12.153799Z","end":"2025-12-02T20:55:12.27152Z","steps":["trace[411758583] 'process raft request'  (duration: 105.998906ms)","trace[411758583] 'compare'  (duration: 11.588706ms)"],"step_count":2}
	
	
	==> kernel <==
	 20:55:56 up  2:38,  0 user,  load average: 6.14, 4.37, 2.75
	Linux old-k8s-version-992336 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [a8c723adb2c9f209e09041fee9e93fcf992494e43fa7e47890154b25a21288b4] <==
	I1202 20:55:00.451904       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1202 20:55:00.453063       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1202 20:55:00.453317       1 main.go:148] setting mtu 1500 for CNI 
	I1202 20:55:00.453342       1 main.go:178] kindnetd IP family: "ipv4"
	I1202 20:55:00.453370       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-02T20:55:00Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1202 20:55:00.755262       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1202 20:55:00.755284       1 controller.go:381] "Waiting for informer caches to sync"
	I1202 20:55:00.755292       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1202 20:55:00.755395       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1202 20:55:01.144806       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1202 20:55:01.144845       1 metrics.go:72] Registering metrics
	I1202 20:55:01.144954       1 controller.go:711] "Syncing nftables rules"
	I1202 20:55:10.756001       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1202 20:55:10.756054       1 main.go:301] handling current node
	I1202 20:55:20.755204       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1202 20:55:20.755305       1 main.go:301] handling current node
	I1202 20:55:30.755150       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1202 20:55:30.755197       1 main.go:301] handling current node
	I1202 20:55:40.755259       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1202 20:55:40.755301       1 main.go:301] handling current node
	I1202 20:55:50.755033       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1202 20:55:50.755123       1 main.go:301] handling current node
	
	
	==> kube-apiserver [b30d0a318021ad78d96505cbec12dab08e463997373813e56adc6e14d585834d] <==
	I1202 20:54:59.320585       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1202 20:54:59.320614       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1202 20:54:59.321099       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1202 20:54:59.322292       1 shared_informer.go:318] Caches are synced for configmaps
	I1202 20:54:59.322685       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1202 20:54:59.325345       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1202 20:54:59.326056       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1202 20:54:59.326800       1 aggregator.go:166] initial CRD sync complete...
	I1202 20:54:59.326889       1 autoregister_controller.go:141] Starting autoregister controller
	I1202 20:54:59.326920       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1202 20:54:59.326954       1 cache.go:39] Caches are synced for autoregister controller
	E1202 20:54:59.330782       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1202 20:54:59.340402       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1202 20:54:59.388283       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1202 20:55:00.225921       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1202 20:55:00.335560       1 controller.go:624] quota admission added evaluator for: namespaces
	I1202 20:55:00.379419       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1202 20:55:00.407510       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1202 20:55:00.417056       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1202 20:55:00.426141       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1202 20:55:00.468303       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.133.179"}
	I1202 20:55:00.482482       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.103.205.123"}
	I1202 20:55:11.556097       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1202 20:55:11.628865       1 controller.go:624] quota admission added evaluator for: endpoints
	I1202 20:55:11.630221       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [b1921b3926c4fba551a94a0ec78b54be832b8754401c93ba491ed82e1b71e6be] <==
	I1202 20:55:11.578863       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-jns97"
	I1202 20:55:11.587043       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="26.689307ms"
	I1202 20:55:11.593311       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="32.935112ms"
	I1202 20:55:11.600502       1 shared_informer.go:318] Caches are synced for endpoint
	I1202 20:55:11.605112       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I1202 20:55:11.615152       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="18.735054ms"
	I1202 20:55:11.615308       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="86.854µs"
	I1202 20:55:11.647231       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="60.11244ms"
	I1202 20:55:11.647324       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="55.009µs"
	I1202 20:55:11.649272       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="73.074µs"
	I1202 20:55:11.668888       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I1202 20:55:11.722287       1 shared_informer.go:318] Caches are synced for resource quota
	I1202 20:55:11.769693       1 shared_informer.go:318] Caches are synced for resource quota
	I1202 20:55:12.088930       1 shared_informer.go:318] Caches are synced for garbage collector
	I1202 20:55:12.114395       1 shared_informer.go:318] Caches are synced for garbage collector
	I1202 20:55:12.114425       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1202 20:55:17.169289       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="77.399316ms"
	I1202 20:55:17.169428       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="80.48µs"
	I1202 20:55:19.052138       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="114.23µs"
	I1202 20:55:20.056647       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="67.553µs"
	I1202 20:55:21.059197       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="114.409µs"
	I1202 20:55:34.095602       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="119.642µs"
	I1202 20:55:38.288181       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="13.727663ms"
	I1202 20:55:38.289588       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="302.428µs"
	I1202 20:55:41.912266       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="110.318µs"
	
	
	==> kube-proxy [02e2a78839ac1c779849b2870b0581ae1fc0576ba27ee665faee95d4690ff516] <==
	I1202 20:55:00.350359       1 server_others.go:69] "Using iptables proxy"
	I1202 20:55:00.360635       1 node.go:141] Successfully retrieved node IP: 192.168.94.2
	I1202 20:55:00.385717       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1202 20:55:00.388814       1 server_others.go:152] "Using iptables Proxier"
	I1202 20:55:00.388941       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1202 20:55:00.388957       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1202 20:55:00.388990       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1202 20:55:00.389483       1 server.go:846] "Version info" version="v1.28.0"
	I1202 20:55:00.389502       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 20:55:00.391261       1 config.go:188] "Starting service config controller"
	I1202 20:55:00.391290       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1202 20:55:00.391363       1 config.go:97] "Starting endpoint slice config controller"
	I1202 20:55:00.391496       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1202 20:55:00.393001       1 config.go:315] "Starting node config controller"
	I1202 20:55:00.393408       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1202 20:55:00.491795       1 shared_informer.go:318] Caches are synced for service config
	I1202 20:55:00.491843       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1202 20:55:00.493840       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [670db3462ea1c5beb2d55dfd0859b3df17a3bf33ad117a56693583fcb4ccdd66] <==
	I1202 20:54:57.069967       1 serving.go:348] Generated self-signed cert in-memory
	W1202 20:54:59.287567       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1202 20:54:59.287607       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found]
	W1202 20:54:59.287625       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1202 20:54:59.287635       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1202 20:54:59.316279       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1202 20:54:59.316314       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 20:54:59.318224       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1202 20:54:59.318273       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1202 20:54:59.319205       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1202 20:54:59.319473       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1202 20:54:59.418927       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 02 20:55:11 old-k8s-version-992336 kubelet[723]: I1202 20:55:11.733290     723 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kj28g\" (UniqueName: \"kubernetes.io/projected/687204ad-153a-443e-adae-a421f528278a-kube-api-access-kj28g\") pod \"dashboard-metrics-scraper-5f989dc9cf-jns97\" (UID: \"687204ad-153a-443e-adae-a421f528278a\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-jns97"
	Dec 02 20:55:11 old-k8s-version-992336 kubelet[723]: I1202 20:55:11.733367     723 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/5a07b7f3-9140-49eb-966b-f8a44aa0fa16-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-kjcfm\" (UID: \"5a07b7f3-9140-49eb-966b-f8a44aa0fa16\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-kjcfm"
	Dec 02 20:55:11 old-k8s-version-992336 kubelet[723]: I1202 20:55:11.733451     723 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/687204ad-153a-443e-adae-a421f528278a-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-jns97\" (UID: \"687204ad-153a-443e-adae-a421f528278a\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-jns97"
	Dec 02 20:55:11 old-k8s-version-992336 kubelet[723]: I1202 20:55:11.733582     723 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pb8vz\" (UniqueName: \"kubernetes.io/projected/5a07b7f3-9140-49eb-966b-f8a44aa0fa16-kube-api-access-pb8vz\") pod \"kubernetes-dashboard-8694d4445c-kjcfm\" (UID: \"5a07b7f3-9140-49eb-966b-f8a44aa0fa16\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-kjcfm"
	Dec 02 20:55:19 old-k8s-version-992336 kubelet[723]: I1202 20:55:19.040026     723 scope.go:117] "RemoveContainer" containerID="c762d3dee3ddbc6677eac7a72488f6df925fbf49ff834d86b05f612d395c131f"
	Dec 02 20:55:19 old-k8s-version-992336 kubelet[723]: I1202 20:55:19.051498     723 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-kjcfm" podStartSLOduration=3.848271595 podCreationTimestamp="2025-12-02 20:55:11 +0000 UTC" firstStartedPulling="2025-12-02 20:55:12.087521621 +0000 UTC m=+16.306566698" lastFinishedPulling="2025-12-02 20:55:16.290684598 +0000 UTC m=+20.509729868" observedRunningTime="2025-12-02 20:55:17.091620059 +0000 UTC m=+21.310665147" watchObservedRunningTime="2025-12-02 20:55:19.051434765 +0000 UTC m=+23.270479856"
	Dec 02 20:55:20 old-k8s-version-992336 kubelet[723]: I1202 20:55:20.044713     723 scope.go:117] "RemoveContainer" containerID="c762d3dee3ddbc6677eac7a72488f6df925fbf49ff834d86b05f612d395c131f"
	Dec 02 20:55:20 old-k8s-version-992336 kubelet[723]: I1202 20:55:20.044950     723 scope.go:117] "RemoveContainer" containerID="7d39d0d64f96064ac67f49d7b291ffc6a723235728102accde7c1367e964cd5e"
	Dec 02 20:55:20 old-k8s-version-992336 kubelet[723]: E1202 20:55:20.045377     723 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-jns97_kubernetes-dashboard(687204ad-153a-443e-adae-a421f528278a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-jns97" podUID="687204ad-153a-443e-adae-a421f528278a"
	Dec 02 20:55:21 old-k8s-version-992336 kubelet[723]: I1202 20:55:21.048623     723 scope.go:117] "RemoveContainer" containerID="7d39d0d64f96064ac67f49d7b291ffc6a723235728102accde7c1367e964cd5e"
	Dec 02 20:55:21 old-k8s-version-992336 kubelet[723]: E1202 20:55:21.048935     723 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-jns97_kubernetes-dashboard(687204ad-153a-443e-adae-a421f528278a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-jns97" podUID="687204ad-153a-443e-adae-a421f528278a"
	Dec 02 20:55:22 old-k8s-version-992336 kubelet[723]: I1202 20:55:22.050554     723 scope.go:117] "RemoveContainer" containerID="7d39d0d64f96064ac67f49d7b291ffc6a723235728102accde7c1367e964cd5e"
	Dec 02 20:55:22 old-k8s-version-992336 kubelet[723]: E1202 20:55:22.050882     723 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-jns97_kubernetes-dashboard(687204ad-153a-443e-adae-a421f528278a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-jns97" podUID="687204ad-153a-443e-adae-a421f528278a"
	Dec 02 20:55:31 old-k8s-version-992336 kubelet[723]: I1202 20:55:31.073448     723 scope.go:117] "RemoveContainer" containerID="e487bf30c0c3633ded0d035f4b5833104a5b2402f66102aa2b3b20b5d8cc9c68"
	Dec 02 20:55:33 old-k8s-version-992336 kubelet[723]: I1202 20:55:33.895257     723 scope.go:117] "RemoveContainer" containerID="7d39d0d64f96064ac67f49d7b291ffc6a723235728102accde7c1367e964cd5e"
	Dec 02 20:55:34 old-k8s-version-992336 kubelet[723]: I1202 20:55:34.084543     723 scope.go:117] "RemoveContainer" containerID="7d39d0d64f96064ac67f49d7b291ffc6a723235728102accde7c1367e964cd5e"
	Dec 02 20:55:34 old-k8s-version-992336 kubelet[723]: I1202 20:55:34.084732     723 scope.go:117] "RemoveContainer" containerID="bf29065d30f2a6e3fbd18c254a02294145f086b26e4171ce8fd09900fd813f1a"
	Dec 02 20:55:34 old-k8s-version-992336 kubelet[723]: E1202 20:55:34.085091     723 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-jns97_kubernetes-dashboard(687204ad-153a-443e-adae-a421f528278a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-jns97" podUID="687204ad-153a-443e-adae-a421f528278a"
	Dec 02 20:55:41 old-k8s-version-992336 kubelet[723]: I1202 20:55:41.901125     723 scope.go:117] "RemoveContainer" containerID="bf29065d30f2a6e3fbd18c254a02294145f086b26e4171ce8fd09900fd813f1a"
	Dec 02 20:55:41 old-k8s-version-992336 kubelet[723]: E1202 20:55:41.901480     723 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-jns97_kubernetes-dashboard(687204ad-153a-443e-adae-a421f528278a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-jns97" podUID="687204ad-153a-443e-adae-a421f528278a"
	Dec 02 20:55:52 old-k8s-version-992336 kubelet[723]: I1202 20:55:52.470572     723 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Dec 02 20:55:52 old-k8s-version-992336 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 02 20:55:52 old-k8s-version-992336 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 02 20:55:52 old-k8s-version-992336 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 20:55:52 old-k8s-version-992336 systemd[1]: kubelet.service: Consumed 1.792s CPU time.
	
	
	==> kubernetes-dashboard [c6a55f74f0b2c40c941df4d57b1985d9f197f20a64448ec742c7becad69978f4] <==
	2025/12/02 20:55:16 Using namespace: kubernetes-dashboard
	2025/12/02 20:55:16 Using in-cluster config to connect to apiserver
	2025/12/02 20:55:16 Using secret token for csrf signing
	2025/12/02 20:55:16 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/02 20:55:16 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/02 20:55:16 Successful initial request to the apiserver, version: v1.28.0
	2025/12/02 20:55:16 Generating JWE encryption key
	2025/12/02 20:55:16 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/02 20:55:16 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/02 20:55:16 Initializing JWE encryption key from synchronized object
	2025/12/02 20:55:16 Creating in-cluster Sidecar client
	2025/12/02 20:55:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/02 20:55:16 Serving insecurely on HTTP port: 9090
	2025/12/02 20:55:46 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/02 20:55:16 Starting overwatch
	
	
	==> storage-provisioner [8679dcbceeeaac9a65dd46a7186f9e2f2fffc82bafe92915a3d128519f8498cd] <==
	I1202 20:55:31.125516       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1202 20:55:31.135188       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1202 20:55:31.135314       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1202 20:55:48.538512       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1202 20:55:48.538843       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"17fffbb9-16db-4d60-9564-e341806dca02", APIVersion:"v1", ResourceVersion:"619", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-992336_833ac1bd-06b6-4279-bf8f-2a470e08bae6 became leader
	I1202 20:55:48.540724       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-992336_833ac1bd-06b6-4279-bf8f-2a470e08bae6!
	I1202 20:55:48.641406       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-992336_833ac1bd-06b6-4279-bf8f-2a470e08bae6!
	
	
	==> storage-provisioner [e487bf30c0c3633ded0d035f4b5833104a5b2402f66102aa2b3b20b5d8cc9c68] <==
	I1202 20:55:00.322825       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1202 20:55:30.327608       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-992336 -n old-k8s-version-992336
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-992336 -n old-k8s-version-992336: exit status 2 (409.444732ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-992336 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-992336
helpers_test.go:243: (dbg) docker inspect old-k8s-version-992336:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "51827f72c809ddc8017131ab6c734b0218cc31e7e1a61761312a7eb2f62a2f62",
	        "Created": "2025-12-02T20:53:31.91066414Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 743874,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-02T20:54:48.947196331Z",
	            "FinishedAt": "2025-12-02T20:54:47.675219839Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/51827f72c809ddc8017131ab6c734b0218cc31e7e1a61761312a7eb2f62a2f62/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/51827f72c809ddc8017131ab6c734b0218cc31e7e1a61761312a7eb2f62a2f62/hostname",
	        "HostsPath": "/var/lib/docker/containers/51827f72c809ddc8017131ab6c734b0218cc31e7e1a61761312a7eb2f62a2f62/hosts",
	        "LogPath": "/var/lib/docker/containers/51827f72c809ddc8017131ab6c734b0218cc31e7e1a61761312a7eb2f62a2f62/51827f72c809ddc8017131ab6c734b0218cc31e7e1a61761312a7eb2f62a2f62-json.log",
	        "Name": "/old-k8s-version-992336",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-992336:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-992336",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "51827f72c809ddc8017131ab6c734b0218cc31e7e1a61761312a7eb2f62a2f62",
	                "LowerDir": "/var/lib/docker/overlay2/7c0073ae68bbddb0c31d7b4a3575e90065e1d78fb046473d890be499fbc620c1-init/diff:/var/lib/docker/overlay2/49bb44e987885c48600d5ae7ad3c81cad82a00f38070c0460882c5746d9fae59/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7c0073ae68bbddb0c31d7b4a3575e90065e1d78fb046473d890be499fbc620c1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7c0073ae68bbddb0c31d7b4a3575e90065e1d78fb046473d890be499fbc620c1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7c0073ae68bbddb0c31d7b4a3575e90065e1d78fb046473d890be499fbc620c1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-992336",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-992336/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-992336",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-992336",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-992336",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "a32ddb34a7a900412afdc026fb6d83dfc12bb96840f2a3c82e24de2c0302f42e",
	            "SandboxKey": "/var/run/docker/netns/a32ddb34a7a9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33488"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33489"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33492"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33490"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33491"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-992336": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "65ab470fa0e2676960773427a71fe76968e07b9da2ef303b86ef95d30a18b6c4",
	                    "EndpointID": "2f8873c719fdce5045c92ebeb907a3634506a61e6e30f294e387646770ead96c",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "7e:f7:a6:e4:f5:b0",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-992336",
	                        "51827f72c809"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-992336 -n old-k8s-version-992336
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-992336 -n old-k8s-version-992336: exit status 2 (355.554226ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-992336 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-992336 logs -n 25: (1.27798856s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬───────
──────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼───────
──────────────┤
	│ ssh     │ -p bridge-775392 sudo crio config                                                                                                                                                                                                                    │ bridge-775392                │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │ 02 Dec 25 20:54 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-992336 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ old-k8s-version-992336       │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │ 02 Dec 25 20:54 UTC │
	│ delete  │ -p bridge-775392                                                                                                                                                                                                                                     │ bridge-775392                │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │ 02 Dec 25 20:54 UTC │
	│ start   │ -p old-k8s-version-992336 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0        │ old-k8s-version-992336       │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │ 02 Dec 25 20:55 UTC │
	│ start   │ -p newest-cni-245604 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-245604            │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │ 02 Dec 25 20:55 UTC │
	│ addons  │ enable metrics-server -p no-preload-336331 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ no-preload-336331            │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │                     │
	│ stop    │ -p no-preload-336331 --alsologtostderr -v=3                                                                                                                                                                                                          │ no-preload-336331            │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:55 UTC │
	│ addons  │ enable metrics-server -p newest-cni-245604 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-245604            │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-997805 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-997805 │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │                     │
	│ stop    │ -p newest-cni-245604 --alsologtostderr -v=3                                                                                                                                                                                                          │ newest-cni-245604            │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:55 UTC │
	│ stop    │ -p default-k8s-diff-port-997805 --alsologtostderr -v=3                                                                                                                                                                                               │ default-k8s-diff-port-997805 │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:55 UTC │
	│ addons  │ enable dashboard -p newest-cni-245604 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ newest-cni-245604            │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:55 UTC │
	│ start   │ -p newest-cni-245604 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-245604            │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:55 UTC │
	│ addons  │ enable dashboard -p no-preload-336331 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ no-preload-336331            │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:55 UTC │
	│ start   │ -p no-preload-336331 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-336331            │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │                     │
	│ image   │ newest-cni-245604 image list --format=json                                                                                                                                                                                                           │ newest-cni-245604            │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:55 UTC │
	│ pause   │ -p newest-cni-245604 --alsologtostderr -v=1                                                                                                                                                                                                          │ newest-cni-245604            │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-997805 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-997805 │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:55 UTC │
	│ start   │ -p default-k8s-diff-port-997805 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-997805 │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │                     │
	│ delete  │ -p newest-cni-245604                                                                                                                                                                                                                                 │ newest-cni-245604            │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:55 UTC │
	│ delete  │ -p newest-cni-245604                                                                                                                                                                                                                                 │ newest-cni-245604            │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:55 UTC │
	│ delete  │ -p disable-driver-mounts-234978                                                                                                                                                                                                                      │ disable-driver-mounts-234978 │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:55 UTC │
	│ start   │ -p embed-certs-386191 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-386191           │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │                     │
	│ image   │ old-k8s-version-992336 image list --format=json                                                                                                                                                                                                      │ old-k8s-version-992336       │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:55 UTC │
	│ pause   │ -p old-k8s-version-992336 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-992336       │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴───────
──────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 20:55:49
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 20:55:49.973376  761851 out.go:360] Setting OutFile to fd 1 ...
	I1202 20:55:49.973479  761851 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:55:49.973486  761851 out.go:374] Setting ErrFile to fd 2...
	I1202 20:55:49.973492  761851 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:55:49.973784  761851 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-407427/.minikube/bin
	I1202 20:55:49.974402  761851 out.go:368] Setting JSON to false
	I1202 20:55:49.976053  761851 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":9494,"bootTime":1764699456,"procs":379,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 20:55:49.976153  761851 start.go:143] virtualization: kvm guest
	I1202 20:55:49.979903  761851 out.go:179] * [embed-certs-386191] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1202 20:55:49.981563  761851 out.go:179]   - MINIKUBE_LOCATION=21997
	I1202 20:55:49.981711  761851 notify.go:221] Checking for updates...
	I1202 20:55:49.985961  761851 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 20:55:49.989444  761851 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-407427/kubeconfig
	I1202 20:55:49.990856  761851 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-407427/.minikube
	I1202 20:55:49.992198  761851 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1202 20:55:49.994165  761851 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 20:55:49.996734  761851 config.go:182] Loaded profile config "default-k8s-diff-port-997805": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 20:55:49.996944  761851 config.go:182] Loaded profile config "no-preload-336331": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 20:55:49.997173  761851 config.go:182] Loaded profile config "old-k8s-version-992336": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1202 20:55:49.997373  761851 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 20:55:50.033364  761851 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1202 20:55:50.033467  761851 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 20:55:50.114622  761851 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-02 20:55:50.101227741 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 20:55:50.114779  761851 docker.go:319] overlay module found
	I1202 20:55:50.117537  761851 out.go:179] * Using the docker driver based on user configuration
	I1202 20:55:50.119145  761851 start.go:309] selected driver: docker
	I1202 20:55:50.119167  761851 start.go:927] validating driver "docker" against <nil>
	I1202 20:55:50.119183  761851 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 20:55:50.120035  761851 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 20:55:50.211212  761851 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-02 20:55:50.198488456 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 20:55:50.211445  761851 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1202 20:55:50.211790  761851 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 20:55:50.214433  761851 out.go:179] * Using Docker driver with root privileges
	I1202 20:55:50.218243  761851 cni.go:84] Creating CNI manager for ""
	I1202 20:55:50.218353  761851 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 20:55:50.218375  761851 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1202 20:55:50.218508  761851 start.go:353] cluster config:
	{Name:embed-certs-386191 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-386191 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPI
D:0 GPUs: AutoPauseInterval:1m0s}
	I1202 20:55:50.220045  761851 out.go:179] * Starting "embed-certs-386191" primary control-plane node in "embed-certs-386191" cluster
	I1202 20:55:50.221707  761851 cache.go:134] Beginning downloading kic base image for docker with crio
	I1202 20:55:50.223105  761851 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1202 20:55:50.224334  761851 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 20:55:50.224383  761851 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 20:55:50.224379  761851 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21997-407427/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1202 20:55:50.224423  761851 cache.go:65] Caching tarball of preloaded images
	I1202 20:55:50.224531  761851 preload.go:238] Found /home/jenkins/minikube-integration/21997-407427/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1202 20:55:50.224544  761851 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1202 20:55:50.224682  761851 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/config.json ...
	I1202 20:55:50.224706  761851 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/config.json: {Name:mk4df57c1427e88de36c6d265cf4b7b9447ba4a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:55:50.254982  761851 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1202 20:55:50.255008  761851 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1202 20:55:50.255030  761851 cache.go:243] Successfully downloaded all kic artifacts
	I1202 20:55:50.255092  761851 start.go:360] acquireMachinesLock for embed-certs-386191: {Name:mk07b451c8d7193712ed79603183bf03b141f2ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 20:55:50.255209  761851 start.go:364] duration metric: took 90.207µs to acquireMachinesLock for "embed-certs-386191"
	I1202 20:55:50.255244  761851 start.go:93] Provisioning new machine with config: &{Name:embed-certs-386191 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-386191 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 20:55:50.255372  761851 start.go:125] createHost starting for "" (driver="docker")
	W1202 20:55:47.478474  754876 pod_ready.go:104] pod "coredns-7d764666f9-ghxk6" is not "Ready", error: <nil>
	W1202 20:55:49.480219  754876 pod_ready.go:104] pod "coredns-7d764666f9-ghxk6" is not "Ready", error: <nil>
	I1202 20:55:48.658867  759377 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 20:55:48.658893  759377 machine.go:97] duration metric: took 4.363922202s to provisionDockerMachine
	I1202 20:55:48.658908  759377 start.go:293] postStartSetup for "default-k8s-diff-port-997805" (driver="docker")
	I1202 20:55:48.659934  759377 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 20:55:48.660266  759377 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 20:55:48.660319  759377 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-997805
	I1202 20:55:48.684270  759377 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/default-k8s-diff-port-997805/id_rsa Username:docker}
	I1202 20:55:48.800470  759377 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 20:55:48.806594  759377 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 20:55:48.806641  759377 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 20:55:48.806659  759377 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-407427/.minikube/addons for local assets ...
	I1202 20:55:48.806723  759377 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-407427/.minikube/files for local assets ...
	I1202 20:55:48.806832  759377 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-407427/.minikube/files/etc/ssl/certs/4110322.pem -> 4110322.pem in /etc/ssl/certs
	I1202 20:55:48.807095  759377 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 20:55:48.817526  759377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/files/etc/ssl/certs/4110322.pem --> /etc/ssl/certs/4110322.pem (1708 bytes)
	I1202 20:55:48.843728  759377 start.go:296] duration metric: took 183.799228ms for postStartSetup
	I1202 20:55:48.843844  759377 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 20:55:48.843886  759377 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-997805
	I1202 20:55:48.867562  759377 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/default-k8s-diff-port-997805/id_rsa Username:docker}
	I1202 20:55:48.976679  759377 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 20:55:48.983737  759377 fix.go:56] duration metric: took 5.130755935s for fixHost
	I1202 20:55:48.983779  759377 start.go:83] releasing machines lock for "default-k8s-diff-port-997805", held for 5.130814844s
	I1202 20:55:48.983853  759377 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-997805
	I1202 20:55:49.008951  759377 ssh_runner.go:195] Run: cat /version.json
	I1202 20:55:49.009046  759377 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-997805
	I1202 20:55:49.009048  759377 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 20:55:49.009136  759377 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-997805
	I1202 20:55:49.034693  759377 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/default-k8s-diff-port-997805/id_rsa Username:docker}
	I1202 20:55:49.035313  759377 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/default-k8s-diff-port-997805/id_rsa Username:docker}
	I1202 20:55:49.217584  759377 ssh_runner.go:195] Run: systemctl --version
	I1202 20:55:49.226948  759377 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 20:55:49.280525  759377 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 20:55:49.287579  759377 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 20:55:49.287663  759377 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 20:55:49.299593  759377 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1202 20:55:49.299624  759377 start.go:496] detecting cgroup driver to use...
	I1202 20:55:49.299667  759377 detect.go:190] detected "systemd" cgroup driver on host os
	I1202 20:55:49.299717  759377 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 20:55:49.321346  759377 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 20:55:49.340202  759377 docker.go:218] disabling cri-docker service (if available) ...
	I1202 20:55:49.340276  759377 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 20:55:49.364580  759377 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 20:55:49.384570  759377 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 20:55:49.507838  759377 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 20:55:49.636982  759377 docker.go:234] disabling docker service ...
	I1202 20:55:49.637124  759377 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 20:55:49.660429  759377 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 20:55:49.676580  759377 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 20:55:49.805919  759377 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 20:55:49.932552  759377 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 20:55:49.950808  759377 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 20:55:49.973269  759377 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1202 20:55:49.973378  759377 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:55:49.987382  759377 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1202 20:55:49.987446  759377 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:55:50.001518  759377 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:55:50.015622  759377 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:55:50.029383  759377 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 20:55:50.042396  759377 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:55:50.055622  759377 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:55:50.069706  759377 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:55:50.082027  759377 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 20:55:50.093878  759377 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 20:55:50.106172  759377 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 20:55:50.241651  759377 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1202 20:55:51.093615  759377 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 20:55:51.093712  759377 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 20:55:51.098803  759377 start.go:564] Will wait 60s for crictl version
	I1202 20:55:51.098893  759377 ssh_runner.go:195] Run: which crictl
	I1202 20:55:51.103616  759377 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 20:55:51.134275  759377 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1202 20:55:51.134365  759377 ssh_runner.go:195] Run: crio --version
	I1202 20:55:51.176508  759377 ssh_runner.go:195] Run: crio --version
	I1202 20:55:51.212619  759377 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.2 ...
	I1202 20:55:51.213954  759377 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-997805 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 20:55:51.239456  759377 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1202 20:55:51.247008  759377 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 20:55:51.258836  759377 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-997805 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-997805 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 20:55:51.259035  759377 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 20:55:51.259113  759377 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 20:55:51.305184  759377 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 20:55:51.305211  759377 crio.go:433] Images already preloaded, skipping extraction
	I1202 20:55:51.305279  759377 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 20:55:51.336679  759377 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 20:55:51.336721  759377 cache_images.go:86] Images are preloaded, skipping loading
	I1202 20:55:51.336736  759377 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.2 crio true true} ...
	I1202 20:55:51.336850  759377 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-997805 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-997805 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1202 20:55:51.336915  759377 ssh_runner.go:195] Run: crio config
	I1202 20:55:51.395485  759377 cni.go:84] Creating CNI manager for ""
	I1202 20:55:51.395526  759377 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 20:55:51.395553  759377 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1202 20:55:51.395590  759377 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-997805 NodeName:default-k8s-diff-port-997805 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.c
rt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 20:55:51.395786  759377 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-997805"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1202 20:55:51.395870  759377 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1202 20:55:51.406735  759377 binaries.go:51] Found k8s binaries, skipping transfer
	I1202 20:55:51.406822  759377 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 20:55:51.416228  759377 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1202 20:55:51.430748  759377 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1202 20:55:51.448244  759377 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I1202 20:55:51.463482  759377 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1202 20:55:51.467906  759377 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 20:55:51.480393  759377 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 20:55:51.588830  759377 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 20:55:51.618253  759377 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/default-k8s-diff-port-997805 for IP: 192.168.85.2
	I1202 20:55:51.618282  759377 certs.go:195] generating shared ca certs ...
	I1202 20:55:51.618303  759377 certs.go:227] acquiring lock for ca certs: {Name:mkea11924193dd4dc459e16d70207bde383196df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:55:51.618470  759377 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-407427/.minikube/ca.key
	I1202 20:55:51.618534  759377 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-407427/.minikube/proxy-client-ca.key
	I1202 20:55:51.618547  759377 certs.go:257] generating profile certs ...
	I1202 20:55:51.618661  759377 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/default-k8s-diff-port-997805/client.key
	I1202 20:55:51.618759  759377 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/default-k8s-diff-port-997805/apiserver.key.36ffc693
	I1202 20:55:51.618817  759377 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/default-k8s-diff-port-997805/proxy-client.key
	I1202 20:55:51.618958  759377 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/411032.pem (1338 bytes)
	W1202 20:55:51.619000  759377 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-407427/.minikube/certs/411032_empty.pem, impossibly tiny 0 bytes
	I1202 20:55:51.619010  759377 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca-key.pem (1679 bytes)
	I1202 20:55:51.619043  759377 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca.pem (1082 bytes)
	I1202 20:55:51.619087  759377 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/cert.pem (1123 bytes)
	I1202 20:55:51.619120  759377 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/key.pem (1675 bytes)
	I1202 20:55:51.619173  759377 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/files/etc/ssl/certs/4110322.pem (1708 bytes)
	I1202 20:55:51.619958  759377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 20:55:51.642775  759377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1202 20:55:51.668086  759377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 20:55:51.695111  759377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1202 20:55:51.723055  759377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/default-k8s-diff-port-997805/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1202 20:55:51.757108  759377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/default-k8s-diff-port-997805/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1202 20:55:51.782582  759377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/default-k8s-diff-port-997805/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 20:55:51.803028  759377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/default-k8s-diff-port-997805/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1202 20:55:51.823897  759377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 20:55:51.845621  759377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/certs/411032.pem --> /usr/share/ca-certificates/411032.pem (1338 bytes)
	I1202 20:55:51.866855  759377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/files/etc/ssl/certs/4110322.pem --> /usr/share/ca-certificates/4110322.pem (1708 bytes)
	I1202 20:55:51.890515  759377 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 20:55:51.906355  759377 ssh_runner.go:195] Run: openssl version
	I1202 20:55:51.914259  759377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4110322.pem && ln -fs /usr/share/ca-certificates/4110322.pem /etc/ssl/certs/4110322.pem"
	I1202 20:55:51.925148  759377 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4110322.pem
	I1202 20:55:51.929800  759377 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 20:13 /usr/share/ca-certificates/4110322.pem
	I1202 20:55:51.929869  759377 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4110322.pem
	I1202 20:55:51.972279  759377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4110322.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 20:55:51.983418  759377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 20:55:51.993784  759377 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 20:55:51.999249  759377 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 19:55 /usr/share/ca-certificates/minikubeCA.pem
	I1202 20:55:51.999316  759377 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 20:55:52.049373  759377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 20:55:52.061515  759377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/411032.pem && ln -fs /usr/share/ca-certificates/411032.pem /etc/ssl/certs/411032.pem"
	I1202 20:55:52.072126  759377 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/411032.pem
	I1202 20:55:52.076862  759377 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 20:13 /usr/share/ca-certificates/411032.pem
	I1202 20:55:52.076956  759377 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/411032.pem
	I1202 20:55:52.126642  759377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/411032.pem /etc/ssl/certs/51391683.0"
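
The lines above show the certificate install step: each CA bundle is copied into /usr/share/ca-certificates, hashed with `openssl x509 -hash -noout`, and symlinked into /etc/ssl/certs under its subject hash (e.g. b5213941.0). A minimal Go sketch of that pattern follows; the helper name linkByHash and the hard-coded paths are illustrative, not minikube's actual code.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkByHash computes the OpenSSL subject hash of a PEM certificate and
// creates the /etc/ssl/certs/<hash>.0 symlink that the c_rehash layout expects,
// mirroring the "openssl x509 -hash" + "ln -fs" pair in the log above.
func linkByHash(pemPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hash %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // equivalent of the -f in "ln -fs"
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
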
	I1202 20:55:52.138458  759377 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 20:55:52.143543  759377 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1202 20:55:52.198225  759377 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1202 20:55:52.254754  759377 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1202 20:55:52.319722  759377 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1202 20:55:52.380903  759377 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1202 20:55:52.422910  759377 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
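
The `openssl x509 -checkend 86400` runs above verify that each control-plane certificate is still valid for at least 24 hours. The same check can be expressed with Go's standard library; this sketch (function name expiresWithin is illustrative) parses the PEM file and compares NotAfter against the window.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM-encoded certificate at path expires
// within the given window (the log's -checkend 86400 corresponds to 24h).
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
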
	I1202 20:55:52.483325  759377 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-997805 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-997805 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 20:55:52.483438  759377 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 20:55:52.483499  759377 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 20:55:52.522620  759377 cri.go:89] found id: "25e14e8feafb6c0d6c5261cd5e507b812e39fcb9c7e196408fe69d780ebbcd1d"
	I1202 20:55:52.522651  759377 cri.go:89] found id: "0c7e2844e2dbdbf5b9ffe8bf4e8d07304b64b059e3d4c965c2010c5d8a39c499"
	I1202 20:55:52.522657  759377 cri.go:89] found id: "81b0ec87511a05a7501d98eb27c52f69372a4b30c4ea523db262c140f9b68cd3"
	I1202 20:55:52.522662  759377 cri.go:89] found id: "e13e6c4d6c5da602ac2e1402a7612205c5a0ceffdccf7618da3035e562a7d9d3"
	I1202 20:55:52.522667  759377 cri.go:89] found id: ""
	I1202 20:55:52.522718  759377 ssh_runner.go:195] Run: sudo runc list -f json
	W1202 20:55:52.539274  759377 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T20:55:52Z" level=error msg="open /run/runc: no such file or directory"
	I1202 20:55:52.539358  759377 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 20:55:52.550759  759377 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1202 20:55:52.550911  759377 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1202 20:55:52.550977  759377 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1202 20:55:52.562444  759377 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1202 20:55:52.563380  759377 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-997805" does not appear in /home/jenkins/minikube-integration/21997-407427/kubeconfig
	I1202 20:55:52.563867  759377 kubeconfig.go:62] /home/jenkins/minikube-integration/21997-407427/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-997805" cluster setting kubeconfig missing "default-k8s-diff-port-997805" context setting]
	I1202 20:55:52.564708  759377 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-407427/kubeconfig: {Name:mk71769b01652992a8aaa239aae4f5c38b3500e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
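
The kubeconfig lines above detect that the "default-k8s-diff-port-997805" cluster and context entries are missing and repair the file under a write lock. A small sketch of that repair using client-go's clientcmd package; ensureCluster, the kubeconfig path, and the server URL are illustrative assumptions, and locking is omitted.

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
	api "k8s.io/client-go/tools/clientcmd/api"
)

// ensureCluster adds cluster and context entries to a kubeconfig when the
// named profile is missing, roughly what the "needs updating (will repair)"
// step above does.
func ensureCluster(kubeconfig, name, server string) error {
	cfg, err := clientcmd.LoadFromFile(kubeconfig)
	if err != nil {
		return err
	}
	if _, ok := cfg.Clusters[name]; !ok {
		cluster := api.NewCluster()
		cluster.Server = server
		cfg.Clusters[name] = cluster
	}
	if _, ok := cfg.Contexts[name]; !ok {
		ctx := api.NewContext()
		ctx.Cluster = name
		cfg.Contexts[name] = ctx
	}
	return clientcmd.WriteToFile(*cfg, kubeconfig)
}

func main() {
	if err := ensureCluster("/home/jenkins/.kube/config",
		"default-k8s-diff-port-997805", "https://192.168.85.2:8444"); err != nil {
		fmt.Println(err)
	}
}
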
	I1202 20:55:52.567122  759377 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1202 20:55:52.580423  759377 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1202 20:55:52.580475  759377 kubeadm.go:602] duration metric: took 29.545337ms to restartPrimaryControlPlane
	I1202 20:55:52.580492  759377 kubeadm.go:403] duration metric: took 97.179033ms to StartCluster
	I1202 20:55:52.580515  759377 settings.go:142] acquiring lock: {Name:mk79858205e0c110b31d2645b49097e349ff37c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:55:52.580624  759377 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-407427/kubeconfig
	I1202 20:55:52.582395  759377 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-407427/kubeconfig: {Name:mk71769b01652992a8aaa239aae4f5c38b3500e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:55:52.582737  759377 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 20:55:52.582982  759377 config.go:182] Loaded profile config "default-k8s-diff-port-997805": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 20:55:52.583044  759377 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1202 20:55:52.583145  759377 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-997805"
	I1202 20:55:52.583167  759377 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-997805"
	W1202 20:55:52.583180  759377 addons.go:248] addon storage-provisioner should already be in state true
	I1202 20:55:52.583208  759377 host.go:66] Checking if "default-k8s-diff-port-997805" exists ...
	I1202 20:55:52.583706  759377 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-997805 --format={{.State.Status}}
	I1202 20:55:52.583924  759377 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-997805"
	I1202 20:55:52.583949  759377 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-997805"
	W1202 20:55:52.583958  759377 addons.go:248] addon dashboard should already be in state true
	I1202 20:55:52.583987  759377 host.go:66] Checking if "default-k8s-diff-port-997805" exists ...
	I1202 20:55:52.584470  759377 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-997805 --format={{.State.Status}}
	I1202 20:55:52.584621  759377 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-997805"
	I1202 20:55:52.584638  759377 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-997805"
	I1202 20:55:52.584909  759377 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-997805 --format={{.State.Status}}
	I1202 20:55:52.590138  759377 out.go:179] * Verifying Kubernetes components...
	I1202 20:55:52.591985  759377 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 20:55:52.621520  759377 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-997805"
	W1202 20:55:52.621550  759377 addons.go:248] addon default-storageclass should already be in state true
	I1202 20:55:52.621581  759377 host.go:66] Checking if "default-k8s-diff-port-997805" exists ...
	I1202 20:55:52.621962  759377 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1202 20:55:52.621973  759377 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 20:55:52.622100  759377 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-997805 --format={{.State.Status}}
	I1202 20:55:52.623522  759377 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 20:55:52.623542  759377 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1202 20:55:52.623861  759377 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-997805
	I1202 20:55:52.629794  759377 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1202 20:55:52.631326  759377 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1202 20:55:52.631354  759377 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1202 20:55:52.631441  759377 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-997805
	I1202 20:55:52.650454  759377 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1202 20:55:52.650440  759377 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/default-k8s-diff-port-997805/id_rsa Username:docker}
	I1202 20:55:52.650477  759377 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1202 20:55:52.650539  759377 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-997805
	I1202 20:55:52.664697  759377 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/default-k8s-diff-port-997805/id_rsa Username:docker}
	I1202 20:55:52.687593  759377 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/default-k8s-diff-port-997805/id_rsa Username:docker}
	I1202 20:55:52.782783  759377 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 20:55:52.788136  759377 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 20:55:52.796186  759377 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1202 20:55:52.796227  759377 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1202 20:55:52.805245  759377 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-997805" to be "Ready" ...
	I1202 20:55:52.813493  759377 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1202 20:55:52.816061  759377 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1202 20:55:52.816120  759377 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1202 20:55:52.836609  759377 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1202 20:55:52.836641  759377 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1202 20:55:52.858664  759377 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1202 20:55:52.858695  759377 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1202 20:55:52.881817  759377 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1202 20:55:52.881850  759377 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1202 20:55:52.898249  759377 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1202 20:55:52.898282  759377 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1202 20:55:52.916317  759377 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1202 20:55:52.916341  759377 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1202 20:55:52.934311  759377 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1202 20:55:52.934421  759377 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1202 20:55:52.954130  759377 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1202 20:55:52.954156  759377 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1202 20:55:52.971994  759377 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
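
The dashboard addon install above copies ten manifests onto the node and then applies them all with a single kubectl invocation, one -f flag per file. A minimal Go sketch of assembling that command; applyAddonManifests is a hypothetical helper, the manifest list is shortened, and KUBECONFIG handling is simplified.

package main

import (
	"fmt"
	"os/exec"
)

// applyAddonManifests builds one "kubectl apply" call with a -f flag per
// manifest, as in the log line above.
func applyAddonManifests(kubectl, kubeconfig string, manifests []string) error {
	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	cmd := exec.Command(kubectl, args...)
	cmd.Env = append(cmd.Environ(), "KUBECONFIG="+kubeconfig)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("kubectl apply failed: %v\n%s", err, out)
	}
	return nil
}

func main() {
	manifests := []string{
		"/etc/kubernetes/addons/dashboard-ns.yaml",
		"/etc/kubernetes/addons/dashboard-dp.yaml",
		"/etc/kubernetes/addons/dashboard-svc.yaml",
	}
	fmt.Println(applyAddonManifests("/var/lib/minikube/binaries/v1.34.2/kubectl",
		"/var/lib/minikube/kubeconfig", manifests))
}
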
	I1202 20:55:50.259730  761851 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1202 20:55:50.260957  761851 start.go:159] libmachine.API.Create for "embed-certs-386191" (driver="docker")
	I1202 20:55:50.261018  761851 client.go:173] LocalClient.Create starting
	I1202 20:55:50.261131  761851 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca.pem
	I1202 20:55:50.261175  761851 main.go:143] libmachine: Decoding PEM data...
	I1202 20:55:50.261199  761851 main.go:143] libmachine: Parsing certificate...
	I1202 20:55:50.261293  761851 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21997-407427/.minikube/certs/cert.pem
	I1202 20:55:50.261321  761851 main.go:143] libmachine: Decoding PEM data...
	I1202 20:55:50.261336  761851 main.go:143] libmachine: Parsing certificate...
	I1202 20:55:50.261828  761851 cli_runner.go:164] Run: docker network inspect embed-certs-386191 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1202 20:55:50.287353  761851 cli_runner.go:211] docker network inspect embed-certs-386191 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1202 20:55:50.287436  761851 network_create.go:284] running [docker network inspect embed-certs-386191] to gather additional debugging logs...
	I1202 20:55:50.287467  761851 cli_runner.go:164] Run: docker network inspect embed-certs-386191
	W1202 20:55:50.313420  761851 cli_runner.go:211] docker network inspect embed-certs-386191 returned with exit code 1
	I1202 20:55:50.313458  761851 network_create.go:287] error running [docker network inspect embed-certs-386191]: docker network inspect embed-certs-386191: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-386191 not found
	I1202 20:55:50.313493  761851 network_create.go:289] output of [docker network inspect embed-certs-386191]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-386191 not found
	
	** /stderr **
	I1202 20:55:50.313695  761851 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 20:55:50.339597  761851 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-acf081edf266 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:42:04:c0:60:47:62} reservation:<nil>}
	I1202 20:55:50.340759  761851 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-9623a21fb225 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:6a:fc:8b:40:15:1b} reservation:<nil>}
	I1202 20:55:50.341559  761851 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-f2b79e7e26a5 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:0e:c7:f4:38:1c:32} reservation:<nil>}
	I1202 20:55:50.342581  761851 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-be4fb772701b IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:22:87:5f:38:96:b7} reservation:<nil>}
	I1202 20:55:50.343861  761851 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-13fe483902b9 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:a2:a4:21:b2:62:5a} reservation:<nil>}
	I1202 20:55:50.344785  761851 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-65ab470fa0e2 IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:16:23:28:7c:c5:24} reservation:<nil>}
	I1202 20:55:50.346012  761851 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001fb66a0}
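
The subnet scan above skips each /24 already claimed by an existing Docker bridge (192.168.49.0/24 through 192.168.94.0/24) and settles on 192.168.103.0/24. A toy Go sketch of that selection; the start at .49 and the step of 9 in the third octet are assumptions read off the subnets listed above, not a statement of minikube's actual policy.

package main

import "fmt"

// firstFreeSubnet walks candidate /24 ranges and returns the first one not
// present in taken.
func firstFreeSubnet(taken map[string]bool) string {
	for octet := 49; octet <= 247; octet += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", octet)
		if !taken[cidr] {
			return cidr
		}
	}
	return ""
}

func main() {
	taken := map[string]bool{
		"192.168.49.0/24": true, "192.168.58.0/24": true, "192.168.67.0/24": true,
		"192.168.76.0/24": true, "192.168.85.0/24": true, "192.168.94.0/24": true,
	}
	fmt.Println(firstFreeSubnet(taken)) // prints 192.168.103.0/24
}
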
	I1202 20:55:50.346044  761851 network_create.go:124] attempt to create docker network embed-certs-386191 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1202 20:55:50.346142  761851 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-386191 embed-certs-386191
	I1202 20:55:50.449757  761851 network_create.go:108] docker network embed-certs-386191 192.168.103.0/24 created
	I1202 20:55:50.449812  761851 kic.go:121] calculated static IP "192.168.103.2" for the "embed-certs-386191" container
	I1202 20:55:50.449912  761851 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1202 20:55:50.476319  761851 cli_runner.go:164] Run: docker volume create embed-certs-386191 --label name.minikube.sigs.k8s.io=embed-certs-386191 --label created_by.minikube.sigs.k8s.io=true
	I1202 20:55:50.544287  761851 oci.go:103] Successfully created a docker volume embed-certs-386191
	I1202 20:55:50.544384  761851 cli_runner.go:164] Run: docker run --rm --name embed-certs-386191-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-386191 --entrypoint /usr/bin/test -v embed-certs-386191:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib
	I1202 20:55:51.390297  761851 oci.go:107] Successfully prepared a docker volume embed-certs-386191
	I1202 20:55:51.390398  761851 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 20:55:51.390416  761851 kic.go:194] Starting extracting preloaded images to volume ...
	I1202 20:55:51.390490  761851 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21997-407427/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-386191:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -I lz4 -xf /preloaded.tar -C /extractDir
	W1202 20:55:51.979014  754876 pod_ready.go:104] pod "coredns-7d764666f9-ghxk6" is not "Ready", error: <nil>
	W1202 20:55:54.048006  754876 pod_ready.go:104] pod "coredns-7d764666f9-ghxk6" is not "Ready", error: <nil>
	I1202 20:55:54.222552  759377 node_ready.go:49] node "default-k8s-diff-port-997805" is "Ready"
	I1202 20:55:54.222597  759377 node_ready.go:38] duration metric: took 1.417304277s for node "default-k8s-diff-port-997805" to be "Ready" ...
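
The node_ready lines above wait (up to 6m) for the node's Ready condition to turn True, which here took about 1.4s. A sketch of the same wait with client-go; waitNodeReady and the kubeconfig path are illustrative assumptions, not the node_ready.go implementation.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the node's Ready condition until it is True or the
// timeout elapses.
func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("node %q not Ready within %s", name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println(waitNodeReady(cs, "default-k8s-diff-port-997805", 6*time.Minute))
}
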
	I1202 20:55:54.222616  759377 api_server.go:52] waiting for apiserver process to appear ...
	I1202 20:55:54.222680  759377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:55:55.521273  759377 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.733090646s)
	I1202 20:55:55.521348  759377 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.707827699s)
	I1202 20:55:55.956240  759377 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.984189677s)
	I1202 20:55:55.956260  759377 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.733551247s)
	I1202 20:55:55.956296  759377 api_server.go:72] duration metric: took 3.373517458s to wait for apiserver process to appear ...
	I1202 20:55:55.956305  759377 api_server.go:88] waiting for apiserver healthz status ...
	I1202 20:55:55.956329  759377 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
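
After the apiserver process appears, the log switches to polling https://192.168.85.2:8444/healthz. A self-contained Go sketch of one such probe; TLS verification is skipped here purely to keep the example standalone (minikube itself trusts the cluster CA), and checkHealthz is an illustrative name.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkHealthz issues one GET against the apiserver healthz endpoint and
// returns the status code plus body.
func checkHealthz(url string) (string, error) {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return "", err
	}
	return fmt.Sprintf("%d %s", resp.StatusCode, string(body)), nil
}

func main() {
	fmt.Println(checkHealthz("https://192.168.85.2:8444/healthz"))
}
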
	I1202 20:55:55.957591  759377 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-997805 addons enable metrics-server
	
	I1202 20:55:55.960080  759377 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	
	
	==> CRI-O <==
	Dec 02 20:55:19 old-k8s-version-992336 crio[567]: time="2025-12-02T20:55:19.086993729Z" level=info msg="Started container" PID=1729 containerID=7d39d0d64f96064ac67f49d7b291ffc6a723235728102accde7c1367e964cd5e description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-jns97/dashboard-metrics-scraper id=dfa3ce6d-a0b3-4f3a-92ac-738e1a066bb7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=24705c97e6c39f3ff4d1d03505e9797f29fdafe19cf88d85eeb62f7fb58e596d
	Dec 02 20:55:20 old-k8s-version-992336 crio[567]: time="2025-12-02T20:55:20.046480414Z" level=info msg="Removing container: c762d3dee3ddbc6677eac7a72488f6df925fbf49ff834d86b05f612d395c131f" id=83795fcf-7033-4a1e-a226-43425c890dfc name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 02 20:55:20 old-k8s-version-992336 crio[567]: time="2025-12-02T20:55:20.06019327Z" level=info msg="Removed container c762d3dee3ddbc6677eac7a72488f6df925fbf49ff834d86b05f612d395c131f: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-jns97/dashboard-metrics-scraper" id=83795fcf-7033-4a1e-a226-43425c890dfc name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 02 20:55:31 old-k8s-version-992336 crio[567]: time="2025-12-02T20:55:31.073959063Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=f658b202-219e-4577-b41a-0c0041256cf2 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 20:55:31 old-k8s-version-992336 crio[567]: time="2025-12-02T20:55:31.075107174Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=531641bd-b5ab-479e-beab-9d7fd652f0f9 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 20:55:31 old-k8s-version-992336 crio[567]: time="2025-12-02T20:55:31.076194618Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=230a7d83-5dcb-4ac9-a680-c52bd33c3cba name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 20:55:31 old-k8s-version-992336 crio[567]: time="2025-12-02T20:55:31.076318733Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 20:55:31 old-k8s-version-992336 crio[567]: time="2025-12-02T20:55:31.0805507Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 20:55:31 old-k8s-version-992336 crio[567]: time="2025-12-02T20:55:31.080697597Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/bf0e32b250acb23a0a2d3be066c74c8a8b6d614fe7ef4bf25bc12e1935332df0/merged/etc/passwd: no such file or directory"
	Dec 02 20:55:31 old-k8s-version-992336 crio[567]: time="2025-12-02T20:55:31.08071913Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/bf0e32b250acb23a0a2d3be066c74c8a8b6d614fe7ef4bf25bc12e1935332df0/merged/etc/group: no such file or directory"
	Dec 02 20:55:31 old-k8s-version-992336 crio[567]: time="2025-12-02T20:55:31.0809277Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 20:55:31 old-k8s-version-992336 crio[567]: time="2025-12-02T20:55:31.10723906Z" level=info msg="Created container 8679dcbceeeaac9a65dd46a7186f9e2f2fffc82bafe92915a3d128519f8498cd: kube-system/storage-provisioner/storage-provisioner" id=230a7d83-5dcb-4ac9-a680-c52bd33c3cba name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 20:55:31 old-k8s-version-992336 crio[567]: time="2025-12-02T20:55:31.107909095Z" level=info msg="Starting container: 8679dcbceeeaac9a65dd46a7186f9e2f2fffc82bafe92915a3d128519f8498cd" id=af9f671b-a609-48d4-8325-49ba54ceb753 name=/runtime.v1.RuntimeService/StartContainer
	Dec 02 20:55:31 old-k8s-version-992336 crio[567]: time="2025-12-02T20:55:31.109792084Z" level=info msg="Started container" PID=1743 containerID=8679dcbceeeaac9a65dd46a7186f9e2f2fffc82bafe92915a3d128519f8498cd description=kube-system/storage-provisioner/storage-provisioner id=af9f671b-a609-48d4-8325-49ba54ceb753 name=/runtime.v1.RuntimeService/StartContainer sandboxID=861e16b68f1a92ea9f025a3342a4d215364187c0293ce4f9c2f00c075b15b465
	Dec 02 20:55:33 old-k8s-version-992336 crio[567]: time="2025-12-02T20:55:33.895913271Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=dd417e9e-c822-4298-afd0-3bb980bc9fe7 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 20:55:33 old-k8s-version-992336 crio[567]: time="2025-12-02T20:55:33.896897842Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=64706d65-529c-4898-b37b-778ac915c582 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 20:55:33 old-k8s-version-992336 crio[567]: time="2025-12-02T20:55:33.897942313Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-jns97/dashboard-metrics-scraper" id=62e78e8b-25d7-469b-b594-2b8805c77aaa name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 20:55:33 old-k8s-version-992336 crio[567]: time="2025-12-02T20:55:33.898149573Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 20:55:33 old-k8s-version-992336 crio[567]: time="2025-12-02T20:55:33.903941275Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 20:55:33 old-k8s-version-992336 crio[567]: time="2025-12-02T20:55:33.904443849Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 20:55:33 old-k8s-version-992336 crio[567]: time="2025-12-02T20:55:33.933280446Z" level=info msg="Created container bf29065d30f2a6e3fbd18c254a02294145f086b26e4171ce8fd09900fd813f1a: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-jns97/dashboard-metrics-scraper" id=62e78e8b-25d7-469b-b594-2b8805c77aaa name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 20:55:33 old-k8s-version-992336 crio[567]: time="2025-12-02T20:55:33.934006401Z" level=info msg="Starting container: bf29065d30f2a6e3fbd18c254a02294145f086b26e4171ce8fd09900fd813f1a" id=58f51271-c0f2-4226-83a6-c60dd5eb3c1e name=/runtime.v1.RuntimeService/StartContainer
	Dec 02 20:55:33 old-k8s-version-992336 crio[567]: time="2025-12-02T20:55:33.9359624Z" level=info msg="Started container" PID=1759 containerID=bf29065d30f2a6e3fbd18c254a02294145f086b26e4171ce8fd09900fd813f1a description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-jns97/dashboard-metrics-scraper id=58f51271-c0f2-4226-83a6-c60dd5eb3c1e name=/runtime.v1.RuntimeService/StartContainer sandboxID=24705c97e6c39f3ff4d1d03505e9797f29fdafe19cf88d85eeb62f7fb58e596d
	Dec 02 20:55:34 old-k8s-version-992336 crio[567]: time="2025-12-02T20:55:34.085786373Z" level=info msg="Removing container: 7d39d0d64f96064ac67f49d7b291ffc6a723235728102accde7c1367e964cd5e" id=7ded94f2-bd1a-405a-bd4e-877d7197a587 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 02 20:55:34 old-k8s-version-992336 crio[567]: time="2025-12-02T20:55:34.095465194Z" level=info msg="Removed container 7d39d0d64f96064ac67f49d7b291ffc6a723235728102accde7c1367e964cd5e: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-jns97/dashboard-metrics-scraper" id=7ded94f2-bd1a-405a-bd4e-877d7197a587 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	bf29065d30f2a       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           24 seconds ago       Exited              dashboard-metrics-scraper   2                   24705c97e6c39       dashboard-metrics-scraper-5f989dc9cf-jns97       kubernetes-dashboard
	8679dcbceeeaa       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           27 seconds ago       Running             storage-provisioner         1                   861e16b68f1a9       storage-provisioner                              kube-system
	c6a55f74f0b2c       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   41 seconds ago       Running             kubernetes-dashboard        0                   d23e0cd2b02ca       kubernetes-dashboard-8694d4445c-kjcfm            kubernetes-dashboard
	a3645c830ae88       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           57 seconds ago       Running             coredns                     0                   18f5d9221f542       coredns-5dd5756b68-ptzsf                         kube-system
	e55eee3e8fc34       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           57 seconds ago       Running             busybox                     1                   a855f6a46899f       busybox                                          default
	e487bf30c0c36       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           57 seconds ago       Exited              storage-provisioner         0                   861e16b68f1a9       storage-provisioner                              kube-system
	02e2a78839ac1       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           57 seconds ago       Running             kube-proxy                  0                   3db0ce9c23af4       kube-proxy-qpzt8                                 kube-system
	a8c723adb2c9f       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           57 seconds ago       Running             kindnet-cni                 0                   f8b2468643e5a       kindnet-jvmsp                                    kube-system
	b1921b3926c4f       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           About a minute ago   Running             kube-controller-manager     0                   b1ab5cad3e79d       kube-controller-manager-old-k8s-version-992336   kube-system
	e1e39d0565d38       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           About a minute ago   Running             etcd                        0                   e33d85da64f19       etcd-old-k8s-version-992336                      kube-system
	b30d0a318021a       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           About a minute ago   Running             kube-apiserver              0                   883565e7c9c58       kube-apiserver-old-k8s-version-992336            kube-system
	670db3462ea1c       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           About a minute ago   Running             kube-scheduler              0                   0491a5f09f3d3       kube-scheduler-old-k8s-version-992336            kube-system
	
	
	==> coredns [a3645c830ae882e91f18ca29697e82834c17cdc1060465378a03c3629aa6ea7f] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 4c7f44b73086be760ec9e64204f63c5cc5a952c8c1c55ba0b41d8fc3315ce3c7d0259d04847cb8b4561043d4549603f3bccfd9b397eeb814eef159d244d26f39
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:34935 - 29950 "HINFO IN 5887536420643288492.1286128556610634739. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.045699135s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               old-k8s-version-992336
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-992336
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c0dec3f81fa1d67ed4ee425e0873d0aa009f9b92
	                    minikube.k8s.io/name=old-k8s-version-992336
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_02T20_53_49_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 02 Dec 2025 20:53:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-992336
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 02 Dec 2025 20:55:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 02 Dec 2025 20:55:29 +0000   Tue, 02 Dec 2025 20:53:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 02 Dec 2025 20:55:29 +0000   Tue, 02 Dec 2025 20:53:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 02 Dec 2025 20:55:29 +0000   Tue, 02 Dec 2025 20:53:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 02 Dec 2025 20:55:29 +0000   Tue, 02 Dec 2025 20:54:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    old-k8s-version-992336
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 c31a325af81b969158c21fa769271857
	  System UUID:                8d62aba3-5101-4346-987f-a9a614755c7a
	  Boot ID:                    9dd5456f-e394-4b4b-9458-48d26faf507e
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         100s
	  kube-system                 coredns-5dd5756b68-ptzsf                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     117s
	  kube-system                 etcd-old-k8s-version-992336                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m9s
	  kube-system                 kindnet-jvmsp                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      117s
	  kube-system                 kube-apiserver-old-k8s-version-992336             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m11s
	  kube-system                 kube-controller-manager-old-k8s-version-992336    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m11s
	  kube-system                 kube-proxy-qpzt8                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-scheduler-old-k8s-version-992336             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m9s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         117s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-jns97        0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-kjcfm             0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 116s               kube-proxy       
	  Normal  Starting                 57s                kube-proxy       
	  Normal  NodeHasSufficientMemory  2m9s               kubelet          Node old-k8s-version-992336 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m9s               kubelet          Node old-k8s-version-992336 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m9s               kubelet          Node old-k8s-version-992336 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m9s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           117s               node-controller  Node old-k8s-version-992336 event: Registered Node old-k8s-version-992336 in Controller
	  Normal  NodeReady                103s               kubelet          Node old-k8s-version-992336 status is now: NodeReady
	  Normal  Starting                 63s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  63s (x8 over 63s)  kubelet          Node old-k8s-version-992336 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    63s (x8 over 63s)  kubelet          Node old-k8s-version-992336 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     63s (x8 over 63s)  kubelet          Node old-k8s-version-992336 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           47s                node-controller  Node old-k8s-version-992336 event: Registered Node old-k8s-version-992336 in Controller
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: a6 a8 6c 2c e4 fb 66 95 0d 9f b9 e6 08 00
	[ +32.254238] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a6 a8 6c 2c e4 fb 66 95 0d 9f b9 e6 08 00
	[Dec 2 20:52] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 62 03 bd 14 45 8a 08 06
	[  +0.000590] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 96 27 ad 0d 40 04 08 06
	[Dec 2 20:53] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 56 d8 4f 6f 7c f7 08 06
	[  +0.000700] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 82 e4 ba c0 78 5f 08 06
	[ +10.119645] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000022] ll header: 00000000: ff ff ff ff ff ff ea 15 d8 88 c9 57 08 06
	[  +2.447166] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 62 df 09 53 d6 6e 08 06
	[  +0.000374] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 62 8d 06 71 0a 5e 08 06
	[Dec 2 20:54] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 12 47 13 50 f6 bc 08 06
	[  +0.001523] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ea 15 d8 88 c9 57 08 06
	[ +22.123549] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 36 0d 45 06 42 2a 08 06
	[  +0.000389] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 56 d8 4f 6f 7c f7 08 06
	
	
	==> etcd [e1e39d0565d3822bf2f251fdb0e8de5f07938ae3aad30710f3eb435ed8294864] <==
	{"level":"info","ts":"2025-12-02T20:54:56.485407Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-02T20:54:56.485924Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 switched to configuration voters=(16125559238023404339)"}
	{"level":"info","ts":"2025-12-02T20:54:56.487113Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","added-peer-id":"dfc97eb0aae75b33","added-peer-peer-urls":["https://192.168.94.2:2380"]}
	{"level":"info","ts":"2025-12-02T20:54:56.487528Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-02T20:54:56.487622Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-02T20:54:56.489261Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-12-02T20:54:56.4895Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"dfc97eb0aae75b33","initial-advertise-peer-urls":["https://192.168.94.2:2380"],"listen-peer-urls":["https://192.168.94.2:2380"],"advertise-client-urls":["https://192.168.94.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.94.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-02T20:54:56.489532Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-02T20:54:56.489582Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2025-12-02T20:54:56.489595Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2025-12-02T20:54:57.977893Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-02T20:54:57.977966Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-02T20:54:57.977995Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgPreVoteResp from dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2025-12-02T20:54:57.978011Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became candidate at term 3"}
	{"level":"info","ts":"2025-12-02T20:54:57.978019Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgVoteResp from dfc97eb0aae75b33 at term 3"}
	{"level":"info","ts":"2025-12-02T20:54:57.978029Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became leader at term 3"}
	{"level":"info","ts":"2025-12-02T20:54:57.978037Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: dfc97eb0aae75b33 elected leader dfc97eb0aae75b33 at term 3"}
	{"level":"info","ts":"2025-12-02T20:54:57.97918Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-02T20:54:57.979201Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"dfc97eb0aae75b33","local-member-attributes":"{Name:old-k8s-version-992336 ClientURLs:[https://192.168.94.2:2379]}","request-path":"/0/members/dfc97eb0aae75b33/attributes","cluster-id":"da400bbece288f5a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-02T20:54:57.979207Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-02T20:54:57.97945Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-02T20:54:57.979486Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-02T20:54:57.980638Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.94.2:2379"}
	{"level":"info","ts":"2025-12-02T20:54:57.980645Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-02T20:55:12.271547Z","caller":"traceutil/trace.go:171","msg":"trace[411758583] transaction","detail":"{read_only:false; response_revision:562; number_of_response:1; }","duration":"117.720508ms","start":"2025-12-02T20:55:12.153799Z","end":"2025-12-02T20:55:12.27152Z","steps":["trace[411758583] 'process raft request'  (duration: 105.998906ms)","trace[411758583] 'compare'  (duration: 11.588706ms)"],"step_count":2}
	
	
	==> kernel <==
	 20:55:58 up  2:38,  0 user,  load average: 6.14, 4.37, 2.75
	Linux old-k8s-version-992336 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [a8c723adb2c9f209e09041fee9e93fcf992494e43fa7e47890154b25a21288b4] <==
	I1202 20:55:00.451904       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1202 20:55:00.453063       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1202 20:55:00.453317       1 main.go:148] setting mtu 1500 for CNI 
	I1202 20:55:00.453342       1 main.go:178] kindnetd IP family: "ipv4"
	I1202 20:55:00.453370       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-02T20:55:00Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1202 20:55:00.755262       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1202 20:55:00.755284       1 controller.go:381] "Waiting for informer caches to sync"
	I1202 20:55:00.755292       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1202 20:55:00.755395       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1202 20:55:01.144806       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1202 20:55:01.144845       1 metrics.go:72] Registering metrics
	I1202 20:55:01.144954       1 controller.go:711] "Syncing nftables rules"
	I1202 20:55:10.756001       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1202 20:55:10.756054       1 main.go:301] handling current node
	I1202 20:55:20.755204       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1202 20:55:20.755305       1 main.go:301] handling current node
	I1202 20:55:30.755150       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1202 20:55:30.755197       1 main.go:301] handling current node
	I1202 20:55:40.755259       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1202 20:55:40.755301       1 main.go:301] handling current node
	I1202 20:55:50.755033       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1202 20:55:50.755123       1 main.go:301] handling current node
	
	
	==> kube-apiserver [b30d0a318021ad78d96505cbec12dab08e463997373813e56adc6e14d585834d] <==
	I1202 20:54:59.320585       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1202 20:54:59.320614       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1202 20:54:59.321099       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1202 20:54:59.322292       1 shared_informer.go:318] Caches are synced for configmaps
	I1202 20:54:59.322685       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1202 20:54:59.325345       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1202 20:54:59.326056       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1202 20:54:59.326800       1 aggregator.go:166] initial CRD sync complete...
	I1202 20:54:59.326889       1 autoregister_controller.go:141] Starting autoregister controller
	I1202 20:54:59.326920       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1202 20:54:59.326954       1 cache.go:39] Caches are synced for autoregister controller
	E1202 20:54:59.330782       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1202 20:54:59.340402       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1202 20:54:59.388283       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1202 20:55:00.225921       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1202 20:55:00.335560       1 controller.go:624] quota admission added evaluator for: namespaces
	I1202 20:55:00.379419       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1202 20:55:00.407510       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1202 20:55:00.417056       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1202 20:55:00.426141       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1202 20:55:00.468303       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.133.179"}
	I1202 20:55:00.482482       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.103.205.123"}
	I1202 20:55:11.556097       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1202 20:55:11.628865       1 controller.go:624] quota admission added evaluator for: endpoints
	I1202 20:55:11.630221       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [b1921b3926c4fba551a94a0ec78b54be832b8754401c93ba491ed82e1b71e6be] <==
	I1202 20:55:11.578863       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-jns97"
	I1202 20:55:11.587043       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="26.689307ms"
	I1202 20:55:11.593311       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="32.935112ms"
	I1202 20:55:11.600502       1 shared_informer.go:318] Caches are synced for endpoint
	I1202 20:55:11.605112       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I1202 20:55:11.615152       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="18.735054ms"
	I1202 20:55:11.615308       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="86.854µs"
	I1202 20:55:11.647231       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="60.11244ms"
	I1202 20:55:11.647324       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="55.009µs"
	I1202 20:55:11.649272       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="73.074µs"
	I1202 20:55:11.668888       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I1202 20:55:11.722287       1 shared_informer.go:318] Caches are synced for resource quota
	I1202 20:55:11.769693       1 shared_informer.go:318] Caches are synced for resource quota
	I1202 20:55:12.088930       1 shared_informer.go:318] Caches are synced for garbage collector
	I1202 20:55:12.114395       1 shared_informer.go:318] Caches are synced for garbage collector
	I1202 20:55:12.114425       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1202 20:55:17.169289       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="77.399316ms"
	I1202 20:55:17.169428       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="80.48µs"
	I1202 20:55:19.052138       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="114.23µs"
	I1202 20:55:20.056647       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="67.553µs"
	I1202 20:55:21.059197       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="114.409µs"
	I1202 20:55:34.095602       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="119.642µs"
	I1202 20:55:38.288181       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="13.727663ms"
	I1202 20:55:38.289588       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="302.428µs"
	I1202 20:55:41.912266       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="110.318µs"
	
	
	==> kube-proxy [02e2a78839ac1c779849b2870b0581ae1fc0576ba27ee665faee95d4690ff516] <==
	I1202 20:55:00.350359       1 server_others.go:69] "Using iptables proxy"
	I1202 20:55:00.360635       1 node.go:141] Successfully retrieved node IP: 192.168.94.2
	I1202 20:55:00.385717       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1202 20:55:00.388814       1 server_others.go:152] "Using iptables Proxier"
	I1202 20:55:00.388941       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1202 20:55:00.388957       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1202 20:55:00.388990       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1202 20:55:00.389483       1 server.go:846] "Version info" version="v1.28.0"
	I1202 20:55:00.389502       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 20:55:00.391261       1 config.go:188] "Starting service config controller"
	I1202 20:55:00.391290       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1202 20:55:00.391363       1 config.go:97] "Starting endpoint slice config controller"
	I1202 20:55:00.391496       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1202 20:55:00.393001       1 config.go:315] "Starting node config controller"
	I1202 20:55:00.393408       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1202 20:55:00.491795       1 shared_informer.go:318] Caches are synced for service config
	I1202 20:55:00.491843       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1202 20:55:00.493840       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [670db3462ea1c5beb2d55dfd0859b3df17a3bf33ad117a56693583fcb4ccdd66] <==
	I1202 20:54:57.069967       1 serving.go:348] Generated self-signed cert in-memory
	W1202 20:54:59.287567       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1202 20:54:59.287607       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found]
	W1202 20:54:59.287625       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1202 20:54:59.287635       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1202 20:54:59.316279       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1202 20:54:59.316314       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 20:54:59.318224       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1202 20:54:59.318273       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1202 20:54:59.319205       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1202 20:54:59.319473       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1202 20:54:59.418927       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 02 20:55:11 old-k8s-version-992336 kubelet[723]: I1202 20:55:11.733290     723 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kj28g\" (UniqueName: \"kubernetes.io/projected/687204ad-153a-443e-adae-a421f528278a-kube-api-access-kj28g\") pod \"dashboard-metrics-scraper-5f989dc9cf-jns97\" (UID: \"687204ad-153a-443e-adae-a421f528278a\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-jns97"
	Dec 02 20:55:11 old-k8s-version-992336 kubelet[723]: I1202 20:55:11.733367     723 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/5a07b7f3-9140-49eb-966b-f8a44aa0fa16-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-kjcfm\" (UID: \"5a07b7f3-9140-49eb-966b-f8a44aa0fa16\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-kjcfm"
	Dec 02 20:55:11 old-k8s-version-992336 kubelet[723]: I1202 20:55:11.733451     723 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/687204ad-153a-443e-adae-a421f528278a-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-jns97\" (UID: \"687204ad-153a-443e-adae-a421f528278a\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-jns97"
	Dec 02 20:55:11 old-k8s-version-992336 kubelet[723]: I1202 20:55:11.733582     723 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pb8vz\" (UniqueName: \"kubernetes.io/projected/5a07b7f3-9140-49eb-966b-f8a44aa0fa16-kube-api-access-pb8vz\") pod \"kubernetes-dashboard-8694d4445c-kjcfm\" (UID: \"5a07b7f3-9140-49eb-966b-f8a44aa0fa16\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-kjcfm"
	Dec 02 20:55:19 old-k8s-version-992336 kubelet[723]: I1202 20:55:19.040026     723 scope.go:117] "RemoveContainer" containerID="c762d3dee3ddbc6677eac7a72488f6df925fbf49ff834d86b05f612d395c131f"
	Dec 02 20:55:19 old-k8s-version-992336 kubelet[723]: I1202 20:55:19.051498     723 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-kjcfm" podStartSLOduration=3.848271595 podCreationTimestamp="2025-12-02 20:55:11 +0000 UTC" firstStartedPulling="2025-12-02 20:55:12.087521621 +0000 UTC m=+16.306566698" lastFinishedPulling="2025-12-02 20:55:16.290684598 +0000 UTC m=+20.509729868" observedRunningTime="2025-12-02 20:55:17.091620059 +0000 UTC m=+21.310665147" watchObservedRunningTime="2025-12-02 20:55:19.051434765 +0000 UTC m=+23.270479856"
	Dec 02 20:55:20 old-k8s-version-992336 kubelet[723]: I1202 20:55:20.044713     723 scope.go:117] "RemoveContainer" containerID="c762d3dee3ddbc6677eac7a72488f6df925fbf49ff834d86b05f612d395c131f"
	Dec 02 20:55:20 old-k8s-version-992336 kubelet[723]: I1202 20:55:20.044950     723 scope.go:117] "RemoveContainer" containerID="7d39d0d64f96064ac67f49d7b291ffc6a723235728102accde7c1367e964cd5e"
	Dec 02 20:55:20 old-k8s-version-992336 kubelet[723]: E1202 20:55:20.045377     723 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-jns97_kubernetes-dashboard(687204ad-153a-443e-adae-a421f528278a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-jns97" podUID="687204ad-153a-443e-adae-a421f528278a"
	Dec 02 20:55:21 old-k8s-version-992336 kubelet[723]: I1202 20:55:21.048623     723 scope.go:117] "RemoveContainer" containerID="7d39d0d64f96064ac67f49d7b291ffc6a723235728102accde7c1367e964cd5e"
	Dec 02 20:55:21 old-k8s-version-992336 kubelet[723]: E1202 20:55:21.048935     723 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-jns97_kubernetes-dashboard(687204ad-153a-443e-adae-a421f528278a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-jns97" podUID="687204ad-153a-443e-adae-a421f528278a"
	Dec 02 20:55:22 old-k8s-version-992336 kubelet[723]: I1202 20:55:22.050554     723 scope.go:117] "RemoveContainer" containerID="7d39d0d64f96064ac67f49d7b291ffc6a723235728102accde7c1367e964cd5e"
	Dec 02 20:55:22 old-k8s-version-992336 kubelet[723]: E1202 20:55:22.050882     723 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-jns97_kubernetes-dashboard(687204ad-153a-443e-adae-a421f528278a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-jns97" podUID="687204ad-153a-443e-adae-a421f528278a"
	Dec 02 20:55:31 old-k8s-version-992336 kubelet[723]: I1202 20:55:31.073448     723 scope.go:117] "RemoveContainer" containerID="e487bf30c0c3633ded0d035f4b5833104a5b2402f66102aa2b3b20b5d8cc9c68"
	Dec 02 20:55:33 old-k8s-version-992336 kubelet[723]: I1202 20:55:33.895257     723 scope.go:117] "RemoveContainer" containerID="7d39d0d64f96064ac67f49d7b291ffc6a723235728102accde7c1367e964cd5e"
	Dec 02 20:55:34 old-k8s-version-992336 kubelet[723]: I1202 20:55:34.084543     723 scope.go:117] "RemoveContainer" containerID="7d39d0d64f96064ac67f49d7b291ffc6a723235728102accde7c1367e964cd5e"
	Dec 02 20:55:34 old-k8s-version-992336 kubelet[723]: I1202 20:55:34.084732     723 scope.go:117] "RemoveContainer" containerID="bf29065d30f2a6e3fbd18c254a02294145f086b26e4171ce8fd09900fd813f1a"
	Dec 02 20:55:34 old-k8s-version-992336 kubelet[723]: E1202 20:55:34.085091     723 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-jns97_kubernetes-dashboard(687204ad-153a-443e-adae-a421f528278a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-jns97" podUID="687204ad-153a-443e-adae-a421f528278a"
	Dec 02 20:55:41 old-k8s-version-992336 kubelet[723]: I1202 20:55:41.901125     723 scope.go:117] "RemoveContainer" containerID="bf29065d30f2a6e3fbd18c254a02294145f086b26e4171ce8fd09900fd813f1a"
	Dec 02 20:55:41 old-k8s-version-992336 kubelet[723]: E1202 20:55:41.901480     723 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-jns97_kubernetes-dashboard(687204ad-153a-443e-adae-a421f528278a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-jns97" podUID="687204ad-153a-443e-adae-a421f528278a"
	Dec 02 20:55:52 old-k8s-version-992336 kubelet[723]: I1202 20:55:52.470572     723 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Dec 02 20:55:52 old-k8s-version-992336 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 02 20:55:52 old-k8s-version-992336 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 02 20:55:52 old-k8s-version-992336 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 20:55:52 old-k8s-version-992336 systemd[1]: kubelet.service: Consumed 1.792s CPU time.
	
	
	==> kubernetes-dashboard [c6a55f74f0b2c40c941df4d57b1985d9f197f20a64448ec742c7becad69978f4] <==
	2025/12/02 20:55:16 Using namespace: kubernetes-dashboard
	2025/12/02 20:55:16 Using in-cluster config to connect to apiserver
	2025/12/02 20:55:16 Using secret token for csrf signing
	2025/12/02 20:55:16 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/02 20:55:16 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/02 20:55:16 Successful initial request to the apiserver, version: v1.28.0
	2025/12/02 20:55:16 Generating JWE encryption key
	2025/12/02 20:55:16 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/02 20:55:16 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/02 20:55:16 Initializing JWE encryption key from synchronized object
	2025/12/02 20:55:16 Creating in-cluster Sidecar client
	2025/12/02 20:55:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/02 20:55:16 Serving insecurely on HTTP port: 9090
	2025/12/02 20:55:46 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/02 20:55:16 Starting overwatch
	
	
	==> storage-provisioner [8679dcbceeeaac9a65dd46a7186f9e2f2fffc82bafe92915a3d128519f8498cd] <==
	I1202 20:55:31.125516       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1202 20:55:31.135188       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1202 20:55:31.135314       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1202 20:55:48.538512       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1202 20:55:48.538843       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"17fffbb9-16db-4d60-9564-e341806dca02", APIVersion:"v1", ResourceVersion:"619", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-992336_833ac1bd-06b6-4279-bf8f-2a470e08bae6 became leader
	I1202 20:55:48.540724       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-992336_833ac1bd-06b6-4279-bf8f-2a470e08bae6!
	I1202 20:55:48.641406       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-992336_833ac1bd-06b6-4279-bf8f-2a470e08bae6!
	
	
	==> storage-provisioner [e487bf30c0c3633ded0d035f4b5833104a5b2402f66102aa2b3b20b5d8cc9c68] <==
	I1202 20:55:00.322825       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1202 20:55:30.327608       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-992336 -n old-k8s-version-992336
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-992336 -n old-k8s-version-992336: exit status 2 (379.951741ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-992336 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (7.27s)
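Note on the failure above: the kubelet log ends with systemd stopping kubelet.service at 20:55:52, which matches the "sudo systemctl disable --now kubelet" step the pause path runs (visible in the no-preload stderr further down), so the follow-up status --format={{.APIServer}} call returning "Running" with exit status 2 reflects a half-paused cluster rather than a healthy one. A manual cross-check of that state could look like the sketch below; the profile name is taken from this run and it assumes the cluster is still up:

    out/minikube-linux-amd64 status -p old-k8s-version-992336 --output=json
    out/minikube-linux-amd64 -p old-k8s-version-992336 ssh -- sudo systemctl is-active kubelet crio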

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (6.58s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-336331 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p no-preload-336331 --alsologtostderr -v=1: exit status 80 (2.417476908s)

                                                
                                                
-- stdout --
	* Pausing node no-preload-336331 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 20:56:27.689608  768410 out.go:360] Setting OutFile to fd 1 ...
	I1202 20:56:27.689884  768410 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:56:27.689893  768410 out.go:374] Setting ErrFile to fd 2...
	I1202 20:56:27.689897  768410 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:56:27.690153  768410 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-407427/.minikube/bin
	I1202 20:56:27.690421  768410 out.go:368] Setting JSON to false
	I1202 20:56:27.690444  768410 mustload.go:66] Loading cluster: no-preload-336331
	I1202 20:56:27.690866  768410 config.go:182] Loaded profile config "no-preload-336331": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 20:56:27.691362  768410 cli_runner.go:164] Run: docker container inspect no-preload-336331 --format={{.State.Status}}
	I1202 20:56:27.712456  768410 host.go:66] Checking if "no-preload-336331" exists ...
	I1202 20:56:27.712736  768410 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 20:56:27.773190  768410 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-02 20:56:27.763170633 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 20:56:27.773835  768410 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-336331 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true)
wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1202 20:56:27.775696  768410 out.go:179] * Pausing node no-preload-336331 ... 
	I1202 20:56:27.777016  768410 host.go:66] Checking if "no-preload-336331" exists ...
	I1202 20:56:27.777326  768410 ssh_runner.go:195] Run: systemctl --version
	I1202 20:56:27.777365  768410 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-336331
	I1202 20:56:27.796805  768410 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33503 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/no-preload-336331/id_rsa Username:docker}
	I1202 20:56:27.897593  768410 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 20:56:27.911463  768410 pause.go:52] kubelet running: true
	I1202 20:56:27.911539  768410 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1202 20:56:28.069039  768410 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1202 20:56:28.069161  768410 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1202 20:56:28.142318  768410 cri.go:89] found id: "6a58ab9b9c0482bd5f103029c7c0f3bdb6d5c02e0fc49f59a43c1b17c375958e"
	I1202 20:56:28.142343  768410 cri.go:89] found id: "88167c0c5457270abf23d0e9c8ba2c26bd39e3fd35acbcc5be0ec9337db24e9e"
	I1202 20:56:28.142349  768410 cri.go:89] found id: "a8e5580b374ec483ae25a01dd060fd7f0ac21c7f4a3afc6999fc62d8ede79880"
	I1202 20:56:28.142353  768410 cri.go:89] found id: "80e9078d18ca5da464c12fd5d5b48d960e2c5867e07585bee911e523c9a0630a"
	I1202 20:56:28.142357  768410 cri.go:89] found id: "4ba2da46b3cf6cbb497d0561308e1e2541b679b2fee63afd57d239b2a5487d39"
	I1202 20:56:28.142362  768410 cri.go:89] found id: "fe483c8206ed4feb9f82c31650dd1c179edfd56fdbd85b46b0866b331f6ea99d"
	I1202 20:56:28.142366  768410 cri.go:89] found id: "8a39789ad0781128fb83397c05c270ff26c09bd32ec5d4c90b8ca4d3a01533cd"
	I1202 20:56:28.142370  768410 cri.go:89] found id: "cec9f1979d354143b12bba5938c36bf941dd1a2a9c5096761b95b27d36bc9e59"
	I1202 20:56:28.142393  768410 cri.go:89] found id: "9d960cc48cf5c1a7210c34cfa4e205107d9dd729104ed2798e71e12ba001d7ec"
	I1202 20:56:28.142402  768410 cri.go:89] found id: "a466626000385a894ca35d0cbbd705f0a7ea58df0bec6d3ee73e98444e45ee26"
	I1202 20:56:28.142406  768410 cri.go:89] found id: "392425906cb8929a82cf3b6a301d75ecc8a3f2afb4aca218a52e369092d206a5"
	I1202 20:56:28.142411  768410 cri.go:89] found id: ""
	I1202 20:56:28.142462  768410 ssh_runner.go:195] Run: sudo runc list -f json
	I1202 20:56:28.155551  768410 retry.go:31] will retry after 187.685257ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T20:56:28Z" level=error msg="open /run/runc: no such file or directory"
	I1202 20:56:28.344102  768410 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 20:56:28.358475  768410 pause.go:52] kubelet running: false
	I1202 20:56:28.358546  768410 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1202 20:56:28.501246  768410 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1202 20:56:28.501370  768410 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1202 20:56:28.575234  768410 cri.go:89] found id: "6a58ab9b9c0482bd5f103029c7c0f3bdb6d5c02e0fc49f59a43c1b17c375958e"
	I1202 20:56:28.575259  768410 cri.go:89] found id: "88167c0c5457270abf23d0e9c8ba2c26bd39e3fd35acbcc5be0ec9337db24e9e"
	I1202 20:56:28.575265  768410 cri.go:89] found id: "a8e5580b374ec483ae25a01dd060fd7f0ac21c7f4a3afc6999fc62d8ede79880"
	I1202 20:56:28.575270  768410 cri.go:89] found id: "80e9078d18ca5da464c12fd5d5b48d960e2c5867e07585bee911e523c9a0630a"
	I1202 20:56:28.575275  768410 cri.go:89] found id: "4ba2da46b3cf6cbb497d0561308e1e2541b679b2fee63afd57d239b2a5487d39"
	I1202 20:56:28.575279  768410 cri.go:89] found id: "fe483c8206ed4feb9f82c31650dd1c179edfd56fdbd85b46b0866b331f6ea99d"
	I1202 20:56:28.575283  768410 cri.go:89] found id: "8a39789ad0781128fb83397c05c270ff26c09bd32ec5d4c90b8ca4d3a01533cd"
	I1202 20:56:28.575287  768410 cri.go:89] found id: "cec9f1979d354143b12bba5938c36bf941dd1a2a9c5096761b95b27d36bc9e59"
	I1202 20:56:28.575302  768410 cri.go:89] found id: "9d960cc48cf5c1a7210c34cfa4e205107d9dd729104ed2798e71e12ba001d7ec"
	I1202 20:56:28.575315  768410 cri.go:89] found id: "a466626000385a894ca35d0cbbd705f0a7ea58df0bec6d3ee73e98444e45ee26"
	I1202 20:56:28.575320  768410 cri.go:89] found id: "392425906cb8929a82cf3b6a301d75ecc8a3f2afb4aca218a52e369092d206a5"
	I1202 20:56:28.575325  768410 cri.go:89] found id: ""
	I1202 20:56:28.575381  768410 ssh_runner.go:195] Run: sudo runc list -f json
	I1202 20:56:28.587801  768410 retry.go:31] will retry after 486.69187ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T20:56:28Z" level=error msg="open /run/runc: no such file or directory"
	I1202 20:56:29.075620  768410 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 20:56:29.089932  768410 pause.go:52] kubelet running: false
	I1202 20:56:29.090012  768410 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1202 20:56:29.231400  768410 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1202 20:56:29.231498  768410 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1202 20:56:29.303665  768410 cri.go:89] found id: "6a58ab9b9c0482bd5f103029c7c0f3bdb6d5c02e0fc49f59a43c1b17c375958e"
	I1202 20:56:29.303690  768410 cri.go:89] found id: "88167c0c5457270abf23d0e9c8ba2c26bd39e3fd35acbcc5be0ec9337db24e9e"
	I1202 20:56:29.303696  768410 cri.go:89] found id: "a8e5580b374ec483ae25a01dd060fd7f0ac21c7f4a3afc6999fc62d8ede79880"
	I1202 20:56:29.303702  768410 cri.go:89] found id: "80e9078d18ca5da464c12fd5d5b48d960e2c5867e07585bee911e523c9a0630a"
	I1202 20:56:29.303706  768410 cri.go:89] found id: "4ba2da46b3cf6cbb497d0561308e1e2541b679b2fee63afd57d239b2a5487d39"
	I1202 20:56:29.303712  768410 cri.go:89] found id: "fe483c8206ed4feb9f82c31650dd1c179edfd56fdbd85b46b0866b331f6ea99d"
	I1202 20:56:29.303717  768410 cri.go:89] found id: "8a39789ad0781128fb83397c05c270ff26c09bd32ec5d4c90b8ca4d3a01533cd"
	I1202 20:56:29.303721  768410 cri.go:89] found id: "cec9f1979d354143b12bba5938c36bf941dd1a2a9c5096761b95b27d36bc9e59"
	I1202 20:56:29.303725  768410 cri.go:89] found id: "9d960cc48cf5c1a7210c34cfa4e205107d9dd729104ed2798e71e12ba001d7ec"
	I1202 20:56:29.303733  768410 cri.go:89] found id: "a466626000385a894ca35d0cbbd705f0a7ea58df0bec6d3ee73e98444e45ee26"
	I1202 20:56:29.303738  768410 cri.go:89] found id: "392425906cb8929a82cf3b6a301d75ecc8a3f2afb4aca218a52e369092d206a5"
	I1202 20:56:29.303742  768410 cri.go:89] found id: ""
	I1202 20:56:29.303785  768410 ssh_runner.go:195] Run: sudo runc list -f json
	I1202 20:56:29.316625  768410 retry.go:31] will retry after 459.572439ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T20:56:29Z" level=error msg="open /run/runc: no such file or directory"
	I1202 20:56:29.776410  768410 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 20:56:29.790411  768410 pause.go:52] kubelet running: false
	I1202 20:56:29.790488  768410 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1202 20:56:29.941051  768410 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1202 20:56:29.941195  768410 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1202 20:56:30.012810  768410 cri.go:89] found id: "6a58ab9b9c0482bd5f103029c7c0f3bdb6d5c02e0fc49f59a43c1b17c375958e"
	I1202 20:56:30.012832  768410 cri.go:89] found id: "88167c0c5457270abf23d0e9c8ba2c26bd39e3fd35acbcc5be0ec9337db24e9e"
	I1202 20:56:30.012837  768410 cri.go:89] found id: "a8e5580b374ec483ae25a01dd060fd7f0ac21c7f4a3afc6999fc62d8ede79880"
	I1202 20:56:30.012842  768410 cri.go:89] found id: "80e9078d18ca5da464c12fd5d5b48d960e2c5867e07585bee911e523c9a0630a"
	I1202 20:56:30.012847  768410 cri.go:89] found id: "4ba2da46b3cf6cbb497d0561308e1e2541b679b2fee63afd57d239b2a5487d39"
	I1202 20:56:30.012851  768410 cri.go:89] found id: "fe483c8206ed4feb9f82c31650dd1c179edfd56fdbd85b46b0866b331f6ea99d"
	I1202 20:56:30.012855  768410 cri.go:89] found id: "8a39789ad0781128fb83397c05c270ff26c09bd32ec5d4c90b8ca4d3a01533cd"
	I1202 20:56:30.012859  768410 cri.go:89] found id: "cec9f1979d354143b12bba5938c36bf941dd1a2a9c5096761b95b27d36bc9e59"
	I1202 20:56:30.012863  768410 cri.go:89] found id: "9d960cc48cf5c1a7210c34cfa4e205107d9dd729104ed2798e71e12ba001d7ec"
	I1202 20:56:30.012870  768410 cri.go:89] found id: "a466626000385a894ca35d0cbbd705f0a7ea58df0bec6d3ee73e98444e45ee26"
	I1202 20:56:30.012874  768410 cri.go:89] found id: "392425906cb8929a82cf3b6a301d75ecc8a3f2afb4aca218a52e369092d206a5"
	I1202 20:56:30.012878  768410 cri.go:89] found id: ""
	I1202 20:56:30.012923  768410 ssh_runner.go:195] Run: sudo runc list -f json
	I1202 20:56:30.028535  768410 out.go:203] 
	W1202 20:56:30.030041  768410 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T20:56:30Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T20:56:30Z" level=error msg="open /run/runc: no such file or directory"
	
	W1202 20:56:30.030075  768410 out.go:285] * 
	* 
	W1202 20:56:30.034873  768410 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 20:56:30.036444  768410 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p no-preload-336331 --alsologtostderr -v=1 failed: exit status 80
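The stderr above shows the root cause: after disabling the kubelet, pause repeatedly runs "sudo runc list -f json" to enumerate running containers, and every attempt exits 1 with "open /run/runc: no such file or directory", so minikube eventually aborts with GUEST_PAUSE. A minimal way to repeat that listing by hand is sketched below (it assumes the no-preload-336331 profile is still running; the crictl call is only included for comparison, it is not what pause executes):

    out/minikube-linux-amd64 -p no-preload-336331 ssh -- "sudo runc list -f json"   # the call pause retries; fails while /run/runc is absent
    out/minikube-linux-amd64 -p no-preload-336331 ssh -- "sudo crictl ps"           # what crio itself reports for the same containers

Comparing the two outputs should indicate whether crio is tracking the containers under a state directory other than the /run/runc path named in the error.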
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-336331
helpers_test.go:243: (dbg) docker inspect no-preload-336331:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5c0b97280754bca09a214a1209794eaed88bdb20761bc875787e6ff1daeba56e",
	        "Created": "2025-12-02T20:54:14.239653127Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 755084,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-02T20:55:31.061844503Z",
	            "FinishedAt": "2025-12-02T20:55:30.136527058Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/5c0b97280754bca09a214a1209794eaed88bdb20761bc875787e6ff1daeba56e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5c0b97280754bca09a214a1209794eaed88bdb20761bc875787e6ff1daeba56e/hostname",
	        "HostsPath": "/var/lib/docker/containers/5c0b97280754bca09a214a1209794eaed88bdb20761bc875787e6ff1daeba56e/hosts",
	        "LogPath": "/var/lib/docker/containers/5c0b97280754bca09a214a1209794eaed88bdb20761bc875787e6ff1daeba56e/5c0b97280754bca09a214a1209794eaed88bdb20761bc875787e6ff1daeba56e-json.log",
	        "Name": "/no-preload-336331",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-336331:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-336331",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5c0b97280754bca09a214a1209794eaed88bdb20761bc875787e6ff1daeba56e",
	                "LowerDir": "/var/lib/docker/overlay2/594362d957f037a0e8c8f90d32655c29146773c96403d1c3d09c40858d94140a-init/diff:/var/lib/docker/overlay2/49bb44e987885c48600d5ae7ad3c81cad82a00f38070c0460882c5746d9fae59/diff",
	                "MergedDir": "/var/lib/docker/overlay2/594362d957f037a0e8c8f90d32655c29146773c96403d1c3d09c40858d94140a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/594362d957f037a0e8c8f90d32655c29146773c96403d1c3d09c40858d94140a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/594362d957f037a0e8c8f90d32655c29146773c96403d1c3d09c40858d94140a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-336331",
	                "Source": "/var/lib/docker/volumes/no-preload-336331/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-336331",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-336331",
	                "name.minikube.sigs.k8s.io": "no-preload-336331",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "65a2e75519167fbc561bca923d1439e164447b61d1dfde6b46aaf79c359426ed",
	            "SandboxKey": "/var/run/docker/netns/65a2e7551916",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33503"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33504"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33507"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33505"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33506"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-336331": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "be4fb772701bc21d00b8604cf864a912ac52112a68f7d1c80495359c23362a1c",
	                    "EndpointID": "3965c027c19be89aa4713f677d35eccaa82185930131aa0c66d5390c5517e84f",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "06:8c:35:25:35:4c",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-336331",
	                        "5c0b97280754"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
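The inspect output above also shows the container was never actually frozen: State.Status is "running", Paused is false, and RestartCount is 0 since the 20:55:31 start. The same fields can be read directly with a format string, for example (a sketch under the same assumption that the container still exists):

    docker inspect -f '{{.State.Status}} paused={{.State.Paused}}' no-preload-336331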
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-336331 -n no-preload-336331
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-336331 -n no-preload-336331: exit status 2 (357.596431ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-336331 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-336331 logs -n 25: (1.226365235s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬───────
──────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼───────
──────────────┤
	│ start   │ -p newest-cni-245604 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-245604            │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │ 02 Dec 25 20:55 UTC │
	│ addons  │ enable metrics-server -p no-preload-336331 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ no-preload-336331            │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │                     │
	│ stop    │ -p no-preload-336331 --alsologtostderr -v=3                                                                                                                                                                                                          │ no-preload-336331            │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:55 UTC │
	│ addons  │ enable metrics-server -p newest-cni-245604 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-245604            │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-997805 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-997805 │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │                     │
	│ stop    │ -p newest-cni-245604 --alsologtostderr -v=3                                                                                                                                                                                                          │ newest-cni-245604            │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:55 UTC │
	│ stop    │ -p default-k8s-diff-port-997805 --alsologtostderr -v=3                                                                                                                                                                                               │ default-k8s-diff-port-997805 │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:55 UTC │
	│ addons  │ enable dashboard -p newest-cni-245604 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ newest-cni-245604            │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:55 UTC │
	│ start   │ -p newest-cni-245604 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-245604            │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:55 UTC │
	│ addons  │ enable dashboard -p no-preload-336331 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ no-preload-336331            │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:55 UTC │
	│ start   │ -p no-preload-336331 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-336331            │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:56 UTC │
	│ image   │ newest-cni-245604 image list --format=json                                                                                                                                                                                                           │ newest-cni-245604            │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:55 UTC │
	│ pause   │ -p newest-cni-245604 --alsologtostderr -v=1                                                                                                                                                                                                          │ newest-cni-245604            │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-997805 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-997805 │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:55 UTC │
	│ start   │ -p default-k8s-diff-port-997805 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-997805 │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │                     │
	│ delete  │ -p newest-cni-245604                                                                                                                                                                                                                                 │ newest-cni-245604            │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:55 UTC │
	│ delete  │ -p newest-cni-245604                                                                                                                                                                                                                                 │ newest-cni-245604            │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:55 UTC │
	│ delete  │ -p disable-driver-mounts-234978                                                                                                                                                                                                                      │ disable-driver-mounts-234978 │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:55 UTC │
	│ start   │ -p embed-certs-386191 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-386191           │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │                     │
	│ image   │ old-k8s-version-992336 image list --format=json                                                                                                                                                                                                      │ old-k8s-version-992336       │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:55 UTC │
	│ pause   │ -p old-k8s-version-992336 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-992336       │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │                     │
	│ delete  │ -p old-k8s-version-992336                                                                                                                                                                                                                            │ old-k8s-version-992336       │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:56 UTC │
	│ delete  │ -p old-k8s-version-992336                                                                                                                                                                                                                            │ old-k8s-version-992336       │ jenkins │ v1.37.0 │ 02 Dec 25 20:56 UTC │ 02 Dec 25 20:56 UTC │
	│ image   │ no-preload-336331 image list --format=json                                                                                                                                                                                                           │ no-preload-336331            │ jenkins │ v1.37.0 │ 02 Dec 25 20:56 UTC │ 02 Dec 25 20:56 UTC │
	│ pause   │ -p no-preload-336331 --alsologtostderr -v=1                                                                                                                                                                                                          │ no-preload-336331            │ jenkins │ v1.37.0 │ 02 Dec 25 20:56 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴───────
──────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 20:55:49
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 20:55:49.973376  761851 out.go:360] Setting OutFile to fd 1 ...
	I1202 20:55:49.973479  761851 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:55:49.973486  761851 out.go:374] Setting ErrFile to fd 2...
	I1202 20:55:49.973492  761851 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:55:49.973784  761851 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-407427/.minikube/bin
	I1202 20:55:49.974402  761851 out.go:368] Setting JSON to false
	I1202 20:55:49.976053  761851 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":9494,"bootTime":1764699456,"procs":379,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 20:55:49.976153  761851 start.go:143] virtualization: kvm guest
	I1202 20:55:49.979903  761851 out.go:179] * [embed-certs-386191] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1202 20:55:49.981563  761851 out.go:179]   - MINIKUBE_LOCATION=21997
	I1202 20:55:49.981711  761851 notify.go:221] Checking for updates...
	I1202 20:55:49.985961  761851 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 20:55:49.989444  761851 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-407427/kubeconfig
	I1202 20:55:49.990856  761851 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-407427/.minikube
	I1202 20:55:49.992198  761851 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1202 20:55:49.994165  761851 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 20:55:49.996734  761851 config.go:182] Loaded profile config "default-k8s-diff-port-997805": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 20:55:49.996944  761851 config.go:182] Loaded profile config "no-preload-336331": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 20:55:49.997173  761851 config.go:182] Loaded profile config "old-k8s-version-992336": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1202 20:55:49.997373  761851 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 20:55:50.033364  761851 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1202 20:55:50.033467  761851 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 20:55:50.114622  761851 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-02 20:55:50.101227741 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 20:55:50.114779  761851 docker.go:319] overlay module found
	I1202 20:55:50.117537  761851 out.go:179] * Using the docker driver based on user configuration
	I1202 20:55:50.119145  761851 start.go:309] selected driver: docker
	I1202 20:55:50.119167  761851 start.go:927] validating driver "docker" against <nil>
	I1202 20:55:50.119183  761851 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 20:55:50.120035  761851 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 20:55:50.211212  761851 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-02 20:55:50.198488456 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 20:55:50.211445  761851 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1202 20:55:50.211790  761851 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 20:55:50.214433  761851 out.go:179] * Using Docker driver with root privileges
	I1202 20:55:50.218243  761851 cni.go:84] Creating CNI manager for ""
	I1202 20:55:50.218353  761851 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 20:55:50.218375  761851 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1202 20:55:50.218508  761851 start.go:353] cluster config:
	{Name:embed-certs-386191 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-386191 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 20:55:50.220045  761851 out.go:179] * Starting "embed-certs-386191" primary control-plane node in "embed-certs-386191" cluster
	I1202 20:55:50.221707  761851 cache.go:134] Beginning downloading kic base image for docker with crio
	I1202 20:55:50.223105  761851 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1202 20:55:50.224334  761851 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 20:55:50.224383  761851 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 20:55:50.224379  761851 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21997-407427/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1202 20:55:50.224423  761851 cache.go:65] Caching tarball of preloaded images
	I1202 20:55:50.224531  761851 preload.go:238] Found /home/jenkins/minikube-integration/21997-407427/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1202 20:55:50.224544  761851 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1202 20:55:50.224682  761851 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/config.json ...
	I1202 20:55:50.224706  761851 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/config.json: {Name:mk4df57c1427e88de36c6d265cf4b7b9447ba4a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:55:50.254982  761851 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1202 20:55:50.255008  761851 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1202 20:55:50.255030  761851 cache.go:243] Successfully downloaded all kic artifacts
	I1202 20:55:50.255092  761851 start.go:360] acquireMachinesLock for embed-certs-386191: {Name:mk07b451c8d7193712ed79603183bf03b141f2ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 20:55:50.255209  761851 start.go:364] duration metric: took 90.207µs to acquireMachinesLock for "embed-certs-386191"
	I1202 20:55:50.255244  761851 start.go:93] Provisioning new machine with config: &{Name:embed-certs-386191 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-386191 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 20:55:50.255372  761851 start.go:125] createHost starting for "" (driver="docker")
	W1202 20:55:47.478474  754876 pod_ready.go:104] pod "coredns-7d764666f9-ghxk6" is not "Ready", error: <nil>
	W1202 20:55:49.480219  754876 pod_ready.go:104] pod "coredns-7d764666f9-ghxk6" is not "Ready", error: <nil>
	I1202 20:55:48.658867  759377 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 20:55:48.658893  759377 machine.go:97] duration metric: took 4.363922202s to provisionDockerMachine
	I1202 20:55:48.658908  759377 start.go:293] postStartSetup for "default-k8s-diff-port-997805" (driver="docker")
	I1202 20:55:48.659934  759377 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 20:55:48.660266  759377 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 20:55:48.660319  759377 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-997805
	I1202 20:55:48.684270  759377 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/default-k8s-diff-port-997805/id_rsa Username:docker}
	I1202 20:55:48.800470  759377 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 20:55:48.806594  759377 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 20:55:48.806641  759377 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 20:55:48.806659  759377 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-407427/.minikube/addons for local assets ...
	I1202 20:55:48.806723  759377 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-407427/.minikube/files for local assets ...
	I1202 20:55:48.806832  759377 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-407427/.minikube/files/etc/ssl/certs/4110322.pem -> 4110322.pem in /etc/ssl/certs
	I1202 20:55:48.807095  759377 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 20:55:48.817526  759377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/files/etc/ssl/certs/4110322.pem --> /etc/ssl/certs/4110322.pem (1708 bytes)
	I1202 20:55:48.843728  759377 start.go:296] duration metric: took 183.799228ms for postStartSetup
	I1202 20:55:48.843844  759377 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 20:55:48.843886  759377 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-997805
	I1202 20:55:48.867562  759377 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/default-k8s-diff-port-997805/id_rsa Username:docker}
	I1202 20:55:48.976679  759377 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 20:55:48.983737  759377 fix.go:56] duration metric: took 5.130755935s for fixHost
	I1202 20:55:48.983779  759377 start.go:83] releasing machines lock for "default-k8s-diff-port-997805", held for 5.130814844s
	I1202 20:55:48.983853  759377 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-997805
	I1202 20:55:49.008951  759377 ssh_runner.go:195] Run: cat /version.json
	I1202 20:55:49.009046  759377 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-997805
	I1202 20:55:49.009048  759377 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 20:55:49.009136  759377 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-997805
	I1202 20:55:49.034693  759377 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/default-k8s-diff-port-997805/id_rsa Username:docker}
	I1202 20:55:49.035313  759377 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/default-k8s-diff-port-997805/id_rsa Username:docker}
	I1202 20:55:49.217584  759377 ssh_runner.go:195] Run: systemctl --version
	I1202 20:55:49.226948  759377 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 20:55:49.280525  759377 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 20:55:49.287579  759377 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 20:55:49.287663  759377 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 20:55:49.299593  759377 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1202 20:55:49.299624  759377 start.go:496] detecting cgroup driver to use...
	I1202 20:55:49.299667  759377 detect.go:190] detected "systemd" cgroup driver on host os
	I1202 20:55:49.299717  759377 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 20:55:49.321346  759377 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 20:55:49.340202  759377 docker.go:218] disabling cri-docker service (if available) ...
	I1202 20:55:49.340276  759377 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 20:55:49.364580  759377 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 20:55:49.384570  759377 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 20:55:49.507838  759377 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 20:55:49.636982  759377 docker.go:234] disabling docker service ...
	I1202 20:55:49.637124  759377 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 20:55:49.660429  759377 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 20:55:49.676580  759377 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 20:55:49.805919  759377 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 20:55:49.932552  759377 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 20:55:49.950808  759377 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 20:55:49.973269  759377 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1202 20:55:49.973378  759377 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:55:49.987382  759377 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1202 20:55:49.987446  759377 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:55:50.001518  759377 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:55:50.015622  759377 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:55:50.029383  759377 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 20:55:50.042396  759377 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:55:50.055622  759377 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:55:50.069706  759377 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:55:50.082027  759377 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 20:55:50.093878  759377 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 20:55:50.106172  759377 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 20:55:50.241651  759377 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1202 20:55:51.093615  759377 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 20:55:51.093712  759377 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 20:55:51.098803  759377 start.go:564] Will wait 60s for crictl version
	I1202 20:55:51.098893  759377 ssh_runner.go:195] Run: which crictl
	I1202 20:55:51.103616  759377 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 20:55:51.134275  759377 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1202 20:55:51.134365  759377 ssh_runner.go:195] Run: crio --version
	I1202 20:55:51.176508  759377 ssh_runner.go:195] Run: crio --version
	I1202 20:55:51.212619  759377 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.2 ...
	I1202 20:55:51.213954  759377 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-997805 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 20:55:51.239456  759377 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1202 20:55:51.247008  759377 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 20:55:51.258836  759377 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-997805 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-997805 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 20:55:51.259035  759377 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 20:55:51.259113  759377 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 20:55:51.305184  759377 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 20:55:51.305211  759377 crio.go:433] Images already preloaded, skipping extraction
	I1202 20:55:51.305279  759377 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 20:55:51.336679  759377 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 20:55:51.336721  759377 cache_images.go:86] Images are preloaded, skipping loading
	I1202 20:55:51.336736  759377 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.2 crio true true} ...
	I1202 20:55:51.336850  759377 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-997805 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-997805 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1202 20:55:51.336915  759377 ssh_runner.go:195] Run: crio config
	I1202 20:55:51.395485  759377 cni.go:84] Creating CNI manager for ""
	I1202 20:55:51.395526  759377 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 20:55:51.395553  759377 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1202 20:55:51.395590  759377 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-997805 NodeName:default-k8s-diff-port-997805 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 20:55:51.395786  759377 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-997805"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1202 20:55:51.395870  759377 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1202 20:55:51.406735  759377 binaries.go:51] Found k8s binaries, skipping transfer
	I1202 20:55:51.406822  759377 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 20:55:51.416228  759377 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1202 20:55:51.430748  759377 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1202 20:55:51.448244  759377 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I1202 20:55:51.463482  759377 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1202 20:55:51.467906  759377 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 20:55:51.480393  759377 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 20:55:51.588830  759377 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 20:55:51.618253  759377 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/default-k8s-diff-port-997805 for IP: 192.168.85.2
	I1202 20:55:51.618282  759377 certs.go:195] generating shared ca certs ...
	I1202 20:55:51.618303  759377 certs.go:227] acquiring lock for ca certs: {Name:mkea11924193dd4dc459e16d70207bde383196df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:55:51.618470  759377 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-407427/.minikube/ca.key
	I1202 20:55:51.618534  759377 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-407427/.minikube/proxy-client-ca.key
	I1202 20:55:51.618547  759377 certs.go:257] generating profile certs ...
	I1202 20:55:51.618661  759377 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/default-k8s-diff-port-997805/client.key
	I1202 20:55:51.618759  759377 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/default-k8s-diff-port-997805/apiserver.key.36ffc693
	I1202 20:55:51.618817  759377 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/default-k8s-diff-port-997805/proxy-client.key
	I1202 20:55:51.618958  759377 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/411032.pem (1338 bytes)
	W1202 20:55:51.619000  759377 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-407427/.minikube/certs/411032_empty.pem, impossibly tiny 0 bytes
	I1202 20:55:51.619010  759377 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca-key.pem (1679 bytes)
	I1202 20:55:51.619043  759377 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca.pem (1082 bytes)
	I1202 20:55:51.619087  759377 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/cert.pem (1123 bytes)
	I1202 20:55:51.619120  759377 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/key.pem (1675 bytes)
	I1202 20:55:51.619173  759377 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/files/etc/ssl/certs/4110322.pem (1708 bytes)
	I1202 20:55:51.619958  759377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 20:55:51.642775  759377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1202 20:55:51.668086  759377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 20:55:51.695111  759377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1202 20:55:51.723055  759377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/default-k8s-diff-port-997805/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1202 20:55:51.757108  759377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/default-k8s-diff-port-997805/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1202 20:55:51.782582  759377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/default-k8s-diff-port-997805/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 20:55:51.803028  759377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/default-k8s-diff-port-997805/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1202 20:55:51.823897  759377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 20:55:51.845621  759377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/certs/411032.pem --> /usr/share/ca-certificates/411032.pem (1338 bytes)
	I1202 20:55:51.866855  759377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/files/etc/ssl/certs/4110322.pem --> /usr/share/ca-certificates/4110322.pem (1708 bytes)
	I1202 20:55:51.890515  759377 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 20:55:51.906355  759377 ssh_runner.go:195] Run: openssl version
	I1202 20:55:51.914259  759377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4110322.pem && ln -fs /usr/share/ca-certificates/4110322.pem /etc/ssl/certs/4110322.pem"
	I1202 20:55:51.925148  759377 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4110322.pem
	I1202 20:55:51.929800  759377 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 20:13 /usr/share/ca-certificates/4110322.pem
	I1202 20:55:51.929869  759377 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4110322.pem
	I1202 20:55:51.972279  759377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4110322.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 20:55:51.983418  759377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 20:55:51.993784  759377 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 20:55:51.999249  759377 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 19:55 /usr/share/ca-certificates/minikubeCA.pem
	I1202 20:55:51.999316  759377 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 20:55:52.049373  759377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 20:55:52.061515  759377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/411032.pem && ln -fs /usr/share/ca-certificates/411032.pem /etc/ssl/certs/411032.pem"
	I1202 20:55:52.072126  759377 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/411032.pem
	I1202 20:55:52.076862  759377 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 20:13 /usr/share/ca-certificates/411032.pem
	I1202 20:55:52.076956  759377 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/411032.pem
	I1202 20:55:52.126642  759377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/411032.pem /etc/ssl/certs/51391683.0"
	I1202 20:55:52.138458  759377 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 20:55:52.143543  759377 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1202 20:55:52.198225  759377 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1202 20:55:52.254754  759377 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1202 20:55:52.319722  759377 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1202 20:55:52.380903  759377 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1202 20:55:52.422910  759377 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1202 20:55:52.483325  759377 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-997805 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-997805 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 20:55:52.483438  759377 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 20:55:52.483499  759377 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 20:55:52.522620  759377 cri.go:89] found id: "25e14e8feafb6c0d6c5261cd5e507b812e39fcb9c7e196408fe69d780ebbcd1d"
	I1202 20:55:52.522651  759377 cri.go:89] found id: "0c7e2844e2dbdbf5b9ffe8bf4e8d07304b64b059e3d4c965c2010c5d8a39c499"
	I1202 20:55:52.522657  759377 cri.go:89] found id: "81b0ec87511a05a7501d98eb27c52f69372a4b30c4ea523db262c140f9b68cd3"
	I1202 20:55:52.522662  759377 cri.go:89] found id: "e13e6c4d6c5da602ac2e1402a7612205c5a0ceffdccf7618da3035e562a7d9d3"
	I1202 20:55:52.522667  759377 cri.go:89] found id: ""
	I1202 20:55:52.522718  759377 ssh_runner.go:195] Run: sudo runc list -f json
	W1202 20:55:52.539274  759377 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T20:55:52Z" level=error msg="open /run/runc: no such file or directory"
	I1202 20:55:52.539358  759377 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 20:55:52.550759  759377 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1202 20:55:52.550911  759377 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1202 20:55:52.550977  759377 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1202 20:55:52.562444  759377 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1202 20:55:52.563380  759377 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-997805" does not appear in /home/jenkins/minikube-integration/21997-407427/kubeconfig
	I1202 20:55:52.563867  759377 kubeconfig.go:62] /home/jenkins/minikube-integration/21997-407427/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-997805" cluster setting kubeconfig missing "default-k8s-diff-port-997805" context setting]
	I1202 20:55:52.564708  759377 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-407427/kubeconfig: {Name:mk71769b01652992a8aaa239aae4f5c38b3500e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:55:52.567122  759377 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1202 20:55:52.580423  759377 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1202 20:55:52.580475  759377 kubeadm.go:602] duration metric: took 29.545337ms to restartPrimaryControlPlane
	I1202 20:55:52.580492  759377 kubeadm.go:403] duration metric: took 97.179033ms to StartCluster
	I1202 20:55:52.580515  759377 settings.go:142] acquiring lock: {Name:mk79858205e0c110b31d2645b49097e349ff37c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:55:52.580624  759377 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-407427/kubeconfig
	I1202 20:55:52.582395  759377 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-407427/kubeconfig: {Name:mk71769b01652992a8aaa239aae4f5c38b3500e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:55:52.582737  759377 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 20:55:52.582982  759377 config.go:182] Loaded profile config "default-k8s-diff-port-997805": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 20:55:52.583044  759377 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1202 20:55:52.583145  759377 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-997805"
	I1202 20:55:52.583167  759377 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-997805"
	W1202 20:55:52.583180  759377 addons.go:248] addon storage-provisioner should already be in state true
	I1202 20:55:52.583208  759377 host.go:66] Checking if "default-k8s-diff-port-997805" exists ...
	I1202 20:55:52.583706  759377 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-997805 --format={{.State.Status}}
	I1202 20:55:52.583924  759377 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-997805"
	I1202 20:55:52.583949  759377 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-997805"
	W1202 20:55:52.583958  759377 addons.go:248] addon dashboard should already be in state true
	I1202 20:55:52.583987  759377 host.go:66] Checking if "default-k8s-diff-port-997805" exists ...
	I1202 20:55:52.584470  759377 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-997805 --format={{.State.Status}}
	I1202 20:55:52.584621  759377 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-997805"
	I1202 20:55:52.584638  759377 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-997805"
	I1202 20:55:52.584909  759377 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-997805 --format={{.State.Status}}
	I1202 20:55:52.590138  759377 out.go:179] * Verifying Kubernetes components...
	I1202 20:55:52.591985  759377 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 20:55:52.621520  759377 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-997805"
	W1202 20:55:52.621550  759377 addons.go:248] addon default-storageclass should already be in state true
	I1202 20:55:52.621581  759377 host.go:66] Checking if "default-k8s-diff-port-997805" exists ...
	I1202 20:55:52.621962  759377 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1202 20:55:52.621973  759377 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 20:55:52.622100  759377 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-997805 --format={{.State.Status}}
	I1202 20:55:52.623522  759377 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 20:55:52.623542  759377 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1202 20:55:52.623861  759377 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-997805
	I1202 20:55:52.629794  759377 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1202 20:55:52.631326  759377 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1202 20:55:52.631354  759377 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1202 20:55:52.631441  759377 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-997805
	I1202 20:55:52.650454  759377 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1202 20:55:52.650440  759377 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/default-k8s-diff-port-997805/id_rsa Username:docker}
	I1202 20:55:52.650477  759377 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1202 20:55:52.650539  759377 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-997805
	I1202 20:55:52.664697  759377 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/default-k8s-diff-port-997805/id_rsa Username:docker}
	I1202 20:55:52.687593  759377 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/default-k8s-diff-port-997805/id_rsa Username:docker}
	I1202 20:55:52.782783  759377 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 20:55:52.788136  759377 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 20:55:52.796186  759377 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1202 20:55:52.796227  759377 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1202 20:55:52.805245  759377 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-997805" to be "Ready" ...
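	The node-readiness wait above corresponds to the standard Ready condition; an equivalent manual check, using the kubeconfig context minikube creates for this profile (hedged example, not part of the test), would be:

	    kubectl --context default-k8s-diff-port-997805 wait --for=condition=Ready node/default-k8s-diff-port-997805 --timeout=6m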
	I1202 20:55:52.813493  759377 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1202 20:55:52.816061  759377 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1202 20:55:52.816120  759377 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1202 20:55:52.836609  759377 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1202 20:55:52.836641  759377 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1202 20:55:52.858664  759377 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1202 20:55:52.858695  759377 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1202 20:55:52.881817  759377 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1202 20:55:52.881850  759377 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1202 20:55:52.898249  759377 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1202 20:55:52.898282  759377 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1202 20:55:52.916317  759377 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1202 20:55:52.916341  759377 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1202 20:55:52.934311  759377 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1202 20:55:52.934421  759377 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1202 20:55:52.954130  759377 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1202 20:55:52.954156  759377 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1202 20:55:52.971994  759377 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
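	Once that combined apply finishes, the dashboard objects land in the kubernetes-dashboard namespace created by dashboard-ns.yaml; a hedged way to confirm the rollout from the host would be:

	    kubectl --context default-k8s-diff-port-997805 -n kubernetes-dashboard get deploy,svc
	    kubectl --context default-k8s-diff-port-997805 -n kubernetes-dashboard rollout status deploy/kubernetes-dashboard --timeout=120s

	(The deployment name kubernetes-dashboard is assumed from the stock minikube dashboard addon manifests; it is not shown in this log.)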
	I1202 20:55:50.259730  761851 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1202 20:55:50.260957  761851 start.go:159] libmachine.API.Create for "embed-certs-386191" (driver="docker")
	I1202 20:55:50.261018  761851 client.go:173] LocalClient.Create starting
	I1202 20:55:50.261131  761851 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca.pem
	I1202 20:55:50.261175  761851 main.go:143] libmachine: Decoding PEM data...
	I1202 20:55:50.261199  761851 main.go:143] libmachine: Parsing certificate...
	I1202 20:55:50.261293  761851 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21997-407427/.minikube/certs/cert.pem
	I1202 20:55:50.261321  761851 main.go:143] libmachine: Decoding PEM data...
	I1202 20:55:50.261336  761851 main.go:143] libmachine: Parsing certificate...
	I1202 20:55:50.261828  761851 cli_runner.go:164] Run: docker network inspect embed-certs-386191 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1202 20:55:50.287353  761851 cli_runner.go:211] docker network inspect embed-certs-386191 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1202 20:55:50.287436  761851 network_create.go:284] running [docker network inspect embed-certs-386191] to gather additional debugging logs...
	I1202 20:55:50.287467  761851 cli_runner.go:164] Run: docker network inspect embed-certs-386191
	W1202 20:55:50.313420  761851 cli_runner.go:211] docker network inspect embed-certs-386191 returned with exit code 1
	I1202 20:55:50.313458  761851 network_create.go:287] error running [docker network inspect embed-certs-386191]: docker network inspect embed-certs-386191: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-386191 not found
	I1202 20:55:50.313493  761851 network_create.go:289] output of [docker network inspect embed-certs-386191]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-386191 not found
	
	** /stderr **
	I1202 20:55:50.313695  761851 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 20:55:50.339597  761851 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-acf081edf266 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:42:04:c0:60:47:62} reservation:<nil>}
	I1202 20:55:50.340759  761851 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-9623a21fb225 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:6a:fc:8b:40:15:1b} reservation:<nil>}
	I1202 20:55:50.341559  761851 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-f2b79e7e26a5 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:0e:c7:f4:38:1c:32} reservation:<nil>}
	I1202 20:55:50.342581  761851 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-be4fb772701b IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:22:87:5f:38:96:b7} reservation:<nil>}
	I1202 20:55:50.343861  761851 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-13fe483902b9 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:a2:a4:21:b2:62:5a} reservation:<nil>}
	I1202 20:55:50.344785  761851 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-65ab470fa0e2 IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:16:23:28:7c:c5:24} reservation:<nil>}
	I1202 20:55:50.346012  761851 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001fb66a0}
	I1202 20:55:50.346044  761851 network_create.go:124] attempt to create docker network embed-certs-386191 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1202 20:55:50.346142  761851 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-386191 embed-certs-386191
	I1202 20:55:50.449757  761851 network_create.go:108] docker network embed-certs-386191 192.168.103.0/24 created
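	The freshly created network can be checked directly with docker; the subnet and gateway should match the values chosen above (illustrative commands, not part of the test run):

	    docker network inspect embed-certs-386191 --format '{{(index .IPAM.Config 0).Subnet}} gateway {{(index .IPAM.Config 0).Gateway}}'
	    docker network ls --filter label=name.minikube.sigs.k8s.io=embed-certs-386191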
	I1202 20:55:50.449812  761851 kic.go:121] calculated static IP "192.168.103.2" for the "embed-certs-386191" container
	I1202 20:55:50.449912  761851 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1202 20:55:50.476319  761851 cli_runner.go:164] Run: docker volume create embed-certs-386191 --label name.minikube.sigs.k8s.io=embed-certs-386191 --label created_by.minikube.sigs.k8s.io=true
	I1202 20:55:50.544287  761851 oci.go:103] Successfully created a docker volume embed-certs-386191
	I1202 20:55:50.544384  761851 cli_runner.go:164] Run: docker run --rm --name embed-certs-386191-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-386191 --entrypoint /usr/bin/test -v embed-certs-386191:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib
	I1202 20:55:51.390297  761851 oci.go:107] Successfully prepared a docker volume embed-certs-386191
	I1202 20:55:51.390398  761851 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 20:55:51.390416  761851 kic.go:194] Starting extracting preloaded images to volume ...
	I1202 20:55:51.390490  761851 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21997-407427/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-386191:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -I lz4 -xf /preloaded.tar -C /extractDir
	W1202 20:55:51.979014  754876 pod_ready.go:104] pod "coredns-7d764666f9-ghxk6" is not "Ready", error: <nil>
	W1202 20:55:54.048006  754876 pod_ready.go:104] pod "coredns-7d764666f9-ghxk6" is not "Ready", error: <nil>
	I1202 20:55:54.222552  759377 node_ready.go:49] node "default-k8s-diff-port-997805" is "Ready"
	I1202 20:55:54.222597  759377 node_ready.go:38] duration metric: took 1.417304277s for node "default-k8s-diff-port-997805" to be "Ready" ...
	I1202 20:55:54.222616  759377 api_server.go:52] waiting for apiserver process to appear ...
	I1202 20:55:54.222680  759377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:55:55.521273  759377 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.733090646s)
	I1202 20:55:55.521348  759377 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.707827699s)
	I1202 20:55:55.956240  759377 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.984189677s)
	I1202 20:55:55.956260  759377 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.733551247s)
	I1202 20:55:55.956296  759377 api_server.go:72] duration metric: took 3.373517458s to wait for apiserver process to appear ...
	I1202 20:55:55.956305  759377 api_server.go:88] waiting for apiserver healthz status ...
	I1202 20:55:55.956329  759377 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1202 20:55:55.957591  759377 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-997805 addons enable metrics-server
	
	I1202 20:55:55.960080  759377 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1202 20:55:55.961425  759377 addons.go:530] duration metric: took 3.378380909s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
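	The resulting addon state for the profile can be listed afterwards with minikube itself (hedged example):

	    minikube -p default-k8s-diff-port-997805 addons list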
	I1202 20:55:55.963108  759377 api_server.go:279] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 20:55:55.963149  759377 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 20:55:56.456815  759377 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1202 20:55:56.464867  759377 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1202 20:55:56.466374  759377 api_server.go:141] control plane version: v1.34.2
	I1202 20:55:56.466405  759377 api_server.go:131] duration metric: took 510.092ms to wait for apiserver health ...
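	The health probe used here is the apiserver's own /healthz endpoint on the forwarded port 8444; an equivalent manual check that sidesteps the serving certificate is to go through the API client (assumed example):

	    kubectl --context default-k8s-diff-port-997805 get --raw '/healthz?verbose'

	which prints the same style of [+]/[-] check list shown above while the rbac bootstrap-roles hook is still pending.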
	I1202 20:55:56.466417  759377 system_pods.go:43] waiting for kube-system pods to appear ...
	I1202 20:55:56.470286  759377 system_pods.go:59] 8 kube-system pods found
	I1202 20:55:56.470321  759377 system_pods.go:61] "coredns-66bc5c9577-jrln7" [37de7399-6357-4f08-9240-fc9e0d884f47] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 20:55:56.470336  759377 system_pods.go:61] "etcd-default-k8s-diff-port-997805" [446321a2-6abf-495a-8198-da2e43d8c18d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1202 20:55:56.470354  759377 system_pods.go:61] "kindnet-rzqpn" [eabc6de0-0707-4a1b-ab5a-4f6e8255bcfb] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1202 20:55:56.470364  759377 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-997805" [04c7ea7b-9b4e-44ee-8148-107daccf6b7b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1202 20:55:56.470376  759377 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-997805" [65af3257-fa45-4d8c-bab3-46c0390ab8df] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1202 20:55:56.470395  759377 system_pods.go:61] "kube-proxy-s2jpn" [407f6b3c-8d8b-47b0-b994-c061eedc6420] Running
	I1202 20:55:56.470403  759377 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-997805" [803a952f-54ba-4326-b70f-907fc7db368e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1202 20:55:56.470411  759377 system_pods.go:61] "storage-provisioner" [08893b97-1192-4f4e-8636-8f2ba82c853d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 20:55:56.470419  759377 system_pods.go:74] duration metric: took 3.994668ms to wait for pod list to return data ...
	I1202 20:55:56.470434  759377 default_sa.go:34] waiting for default service account to be created ...
	I1202 20:55:56.472796  759377 default_sa.go:45] found service account: "default"
	I1202 20:55:56.472821  759377 default_sa.go:55] duration metric: took 2.376879ms for default service account to be created ...
	I1202 20:55:56.472832  759377 system_pods.go:116] waiting for k8s-apps to be running ...
	I1202 20:55:56.476530  759377 system_pods.go:86] 8 kube-system pods found
	I1202 20:55:56.476568  759377 system_pods.go:89] "coredns-66bc5c9577-jrln7" [37de7399-6357-4f08-9240-fc9e0d884f47] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 20:55:56.476586  759377 system_pods.go:89] "etcd-default-k8s-diff-port-997805" [446321a2-6abf-495a-8198-da2e43d8c18d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1202 20:55:56.476598  759377 system_pods.go:89] "kindnet-rzqpn" [eabc6de0-0707-4a1b-ab5a-4f6e8255bcfb] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1202 20:55:56.476611  759377 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-997805" [04c7ea7b-9b4e-44ee-8148-107daccf6b7b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1202 20:55:56.476622  759377 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-997805" [65af3257-fa45-4d8c-bab3-46c0390ab8df] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1202 20:55:56.476636  759377 system_pods.go:89] "kube-proxy-s2jpn" [407f6b3c-8d8b-47b0-b994-c061eedc6420] Running
	I1202 20:55:56.476644  759377 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-997805" [803a952f-54ba-4326-b70f-907fc7db368e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1202 20:55:56.476652  759377 system_pods.go:89] "storage-provisioner" [08893b97-1192-4f4e-8636-8f2ba82c853d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 20:55:56.476666  759377 system_pods.go:126] duration metric: took 3.826088ms to wait for k8s-apps to be running ...
	I1202 20:55:56.476679  759377 system_svc.go:44] waiting for kubelet service to be running ....
	I1202 20:55:56.476731  759377 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 20:55:56.496595  759377 system_svc.go:56] duration metric: took 19.904103ms WaitForService to wait for kubelet
	I1202 20:55:56.496628  759377 kubeadm.go:587] duration metric: took 3.913848958s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 20:55:56.496651  759377 node_conditions.go:102] verifying NodePressure condition ...
	I1202 20:55:56.501320  759377 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1202 20:55:56.501357  759377 node_conditions.go:123] node cpu capacity is 8
	I1202 20:55:56.501378  759377 node_conditions.go:105] duration metric: took 4.719966ms to run NodePressure ...
	I1202 20:55:56.501394  759377 start.go:242] waiting for startup goroutines ...
	I1202 20:55:56.501406  759377 start.go:247] waiting for cluster config update ...
	I1202 20:55:56.501422  759377 start.go:256] writing updated cluster config ...
	I1202 20:55:56.501764  759377 ssh_runner.go:195] Run: rm -f paused
	I1202 20:55:56.507506  759377 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1202 20:55:56.511978  759377 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-jrln7" in "kube-system" namespace to be "Ready" or be gone ...
	W1202 20:55:58.518638  759377 pod_ready.go:104] pod "coredns-66bc5c9577-jrln7" is not "Ready", error: <nil>
	I1202 20:55:55.882395  761851 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21997-407427/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-386191:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -I lz4 -xf /preloaded.tar -C /extractDir: (4.491855191s)
	I1202 20:55:55.882432  761851 kic.go:203] duration metric: took 4.49201135s to extract preloaded images to volume ...
	W1202 20:55:55.882649  761851 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1202 20:55:55.882730  761851 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1202 20:55:55.882796  761851 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1202 20:55:55.970786  761851 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-386191 --name embed-certs-386191 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-386191 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-386191 --network embed-certs-386191 --ip 192.168.103.2 --volume embed-certs-386191:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b
	I1202 20:55:56.322797  761851 cli_runner.go:164] Run: docker container inspect embed-certs-386191 --format={{.State.Running}}
	I1202 20:55:56.346318  761851 cli_runner.go:164] Run: docker container inspect embed-certs-386191 --format={{.State.Status}}
	I1202 20:55:56.369508  761851 cli_runner.go:164] Run: docker exec embed-certs-386191 stat /var/lib/dpkg/alternatives/iptables
	I1202 20:55:56.426161  761851 oci.go:144] the created container "embed-certs-386191" has a running status.
	I1202 20:55:56.426198  761851 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21997-407427/.minikube/machines/embed-certs-386191/id_rsa...
	I1202 20:55:56.605690  761851 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21997-407427/.minikube/machines/embed-certs-386191/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1202 20:55:56.639247  761851 cli_runner.go:164] Run: docker container inspect embed-certs-386191 --format={{.State.Status}}
	I1202 20:55:56.661049  761851 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1202 20:55:56.661086  761851 kic_runner.go:114] Args: [docker exec --privileged embed-certs-386191 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1202 20:55:56.743919  761851 cli_runner.go:164] Run: docker container inspect embed-certs-386191 --format={{.State.Status}}
	I1202 20:55:56.771200  761851 machine.go:94] provisionDockerMachine start ...
	I1202 20:55:56.771338  761851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-386191
	I1202 20:55:56.796209  761851 main.go:143] libmachine: Using SSH client type: native
	I1202 20:55:56.796568  761851 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33513 <nil> <nil>}
	I1202 20:55:56.796593  761851 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 20:55:56.950615  761851 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-386191
	
	I1202 20:55:56.950657  761851 ubuntu.go:182] provisioning hostname "embed-certs-386191"
	I1202 20:55:56.950733  761851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-386191
	I1202 20:55:56.973211  761851 main.go:143] libmachine: Using SSH client type: native
	I1202 20:55:56.973537  761851 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33513 <nil> <nil>}
	I1202 20:55:56.973561  761851 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-386191 && echo "embed-certs-386191" | sudo tee /etc/hostname
	I1202 20:55:57.141391  761851 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-386191
	
	I1202 20:55:57.141500  761851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-386191
	I1202 20:55:57.162911  761851 main.go:143] libmachine: Using SSH client type: native
	I1202 20:55:57.163198  761851 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33513 <nil> <nil>}
	I1202 20:55:57.163228  761851 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-386191' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-386191/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-386191' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 20:55:57.310513  761851 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1202 20:55:57.310553  761851 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-407427/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-407427/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-407427/.minikube}
	I1202 20:55:57.310589  761851 ubuntu.go:190] setting up certificates
	I1202 20:55:57.310609  761851 provision.go:84] configureAuth start
	I1202 20:55:57.310699  761851 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-386191
	I1202 20:55:57.331293  761851 provision.go:143] copyHostCerts
	I1202 20:55:57.331361  761851 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-407427/.minikube/cert.pem, removing ...
	I1202 20:55:57.331377  761851 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-407427/.minikube/cert.pem
	I1202 20:55:57.331457  761851 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-407427/.minikube/cert.pem (1123 bytes)
	I1202 20:55:57.331608  761851 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-407427/.minikube/key.pem, removing ...
	I1202 20:55:57.331619  761851 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-407427/.minikube/key.pem
	I1202 20:55:57.331661  761851 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-407427/.minikube/key.pem (1675 bytes)
	I1202 20:55:57.331806  761851 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-407427/.minikube/ca.pem, removing ...
	I1202 20:55:57.331820  761851 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-407427/.minikube/ca.pem
	I1202 20:55:57.331861  761851 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-407427/.minikube/ca.pem (1082 bytes)
	I1202 20:55:57.331969  761851 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-407427/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca-key.pem org=jenkins.embed-certs-386191 san=[127.0.0.1 192.168.103.2 embed-certs-386191 localhost minikube]
	I1202 20:55:57.478343  761851 provision.go:177] copyRemoteCerts
	I1202 20:55:57.478412  761851 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 20:55:57.478461  761851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-386191
	I1202 20:55:57.503684  761851 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33513 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/embed-certs-386191/id_rsa Username:docker}
	I1202 20:55:57.613653  761851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 20:55:57.638025  761851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1202 20:55:57.660295  761851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1202 20:55:57.684474  761851 provision.go:87] duration metric: took 373.842939ms to configureAuth
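	The server certificate generated during configureAuth should carry exactly the SANs listed earlier (127.0.0.1, 192.168.103.2, embed-certs-386191, localhost, minikube); a hedged spot-check against the host-side copy, using the ServerCertPath from the auth options above, would be:

	    openssl x509 -noout -text -in /home/jenkins/minikube-integration/21997-407427/.minikube/machines/server.pem | grep -A1 'Subject Alternative Name'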
	I1202 20:55:57.684512  761851 ubuntu.go:206] setting minikube options for container-runtime
	I1202 20:55:57.684722  761851 config.go:182] Loaded profile config "embed-certs-386191": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 20:55:57.684859  761851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-386191
	I1202 20:55:57.705791  761851 main.go:143] libmachine: Using SSH client type: native
	I1202 20:55:57.706104  761851 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33513 <nil> <nil>}
	I1202 20:55:57.706127  761851 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 20:55:58.017837  761851 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 20:55:58.017867  761851 machine.go:97] duration metric: took 1.246644154s to provisionDockerMachine
	I1202 20:55:58.017881  761851 client.go:176] duration metric: took 7.756854866s to LocalClient.Create
	I1202 20:55:58.017904  761851 start.go:167] duration metric: took 7.756953433s to libmachine.API.Create "embed-certs-386191"
	I1202 20:55:58.017914  761851 start.go:293] postStartSetup for "embed-certs-386191" (driver="docker")
	I1202 20:55:58.017926  761851 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 20:55:58.017993  761851 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 20:55:58.018051  761851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-386191
	I1202 20:55:58.040966  761851 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33513 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/embed-certs-386191/id_rsa Username:docker}
	I1202 20:55:58.164646  761851 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 20:55:58.169173  761851 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 20:55:58.169218  761851 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 20:55:58.169234  761851 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-407427/.minikube/addons for local assets ...
	I1202 20:55:58.169292  761851 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-407427/.minikube/files for local assets ...
	I1202 20:55:58.169398  761851 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-407427/.minikube/files/etc/ssl/certs/4110322.pem -> 4110322.pem in /etc/ssl/certs
	I1202 20:55:58.169534  761851 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 20:55:58.178343  761851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/files/etc/ssl/certs/4110322.pem --> /etc/ssl/certs/4110322.pem (1708 bytes)
	I1202 20:55:58.201537  761851 start.go:296] duration metric: took 183.605841ms for postStartSetup
	I1202 20:55:58.201980  761851 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-386191
	I1202 20:55:58.222381  761851 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/config.json ...
	I1202 20:55:58.222725  761851 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 20:55:58.222779  761851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-386191
	I1202 20:55:58.246974  761851 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33513 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/embed-certs-386191/id_rsa Username:docker}
	I1202 20:55:58.349308  761851 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 20:55:58.354335  761851 start.go:128] duration metric: took 8.098942472s to createHost
	I1202 20:55:58.354367  761851 start.go:83] releasing machines lock for "embed-certs-386191", held for 8.099141281s
	I1202 20:55:58.354452  761851 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-386191
	I1202 20:55:58.375692  761851 ssh_runner.go:195] Run: cat /version.json
	I1202 20:55:58.375743  761851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-386191
	I1202 20:55:58.375778  761851 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 20:55:58.375875  761851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-386191
	I1202 20:55:58.399444  761851 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33513 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/embed-certs-386191/id_rsa Username:docker}
	I1202 20:55:58.401096  761851 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33513 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/embed-certs-386191/id_rsa Username:docker}
	I1202 20:55:58.567709  761851 ssh_runner.go:195] Run: systemctl --version
	I1202 20:55:58.576291  761851 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 20:55:58.616262  761851 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 20:55:58.621961  761851 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 20:55:58.622044  761851 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 20:55:58.651183  761851 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1202 20:55:58.651217  761851 start.go:496] detecting cgroup driver to use...
	I1202 20:55:58.651265  761851 detect.go:190] detected "systemd" cgroup driver on host os
	I1202 20:55:58.651331  761851 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 20:55:58.670441  761851 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 20:55:58.684478  761851 docker.go:218] disabling cri-docker service (if available) ...
	I1202 20:55:58.684542  761851 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 20:55:58.704480  761851 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 20:55:58.725624  761851 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 20:55:58.831744  761851 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 20:55:58.927526  761851 docker.go:234] disabling docker service ...
	I1202 20:55:58.927588  761851 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 20:55:58.947085  761851 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 20:55:58.961716  761851 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 20:55:59.059830  761851 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 20:55:59.155836  761851 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 20:55:59.170575  761851 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 20:55:59.187647  761851 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1202 20:55:59.187711  761851 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:55:59.199691  761851 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1202 20:55:59.199752  761851 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:55:59.210377  761851 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:55:59.221666  761851 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:55:59.233039  761851 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 20:55:59.242836  761851 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:55:59.252564  761851 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:55:59.268580  761851 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:55:59.279302  761851 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 20:55:59.288550  761851 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 20:55:59.297166  761851 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 20:55:59.384478  761851 ssh_runner.go:195] Run: sudo systemctl restart crio
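	The sed edits above all target /etc/crio/crio.conf.d/02-crio.conf, so after this restart the drop-in should contain, roughly, pause_image = "registry.k8s.io/pause:3.10.1", cgroup_manager = "systemd", conmon_cgroup = "pod", and a default_sysctls list holding "net.ipv4.ip_unprivileged_port_start=0". A hedged way to confirm from the host:

	    minikube ssh -p embed-certs-386191 -- sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf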
	I1202 20:55:59.534012  761851 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 20:55:59.534100  761851 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 20:55:59.538865  761851 start.go:564] Will wait 60s for crictl version
	I1202 20:55:59.538929  761851 ssh_runner.go:195] Run: which crictl
	I1202 20:55:59.542822  761851 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 20:55:59.570175  761851 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1202 20:55:59.570275  761851 ssh_runner.go:195] Run: crio --version
	I1202 20:55:59.600365  761851 ssh_runner.go:195] Run: crio --version
	I1202 20:55:59.632281  761851 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.2 ...
	I1202 20:55:59.633569  761851 cli_runner.go:164] Run: docker network inspect embed-certs-386191 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 20:55:59.653989  761851 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1202 20:55:59.659705  761851 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 20:55:59.673939  761851 kubeadm.go:884] updating cluster {Name:embed-certs-386191 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-386191 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 20:55:59.674148  761851 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 20:55:59.674231  761851 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 20:55:59.721572  761851 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 20:55:59.721623  761851 crio.go:433] Images already preloaded, skipping extraction
	I1202 20:55:59.721807  761851 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 20:55:59.763726  761851 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 20:55:59.763753  761851 cache_images.go:86] Images are preloaded, skipping loading
	I1202 20:55:59.763763  761851 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.2 crio true true} ...
	I1202 20:55:59.763877  761851 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-386191 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:embed-certs-386191 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
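	This kubelet override is written as a systemd drop-in for the kubelet unit inside the node (the exact drop-in path is not shown in this log); it can be reviewed with:

	    minikube ssh -p embed-certs-386191 -- systemctl cat kubelet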
	I1202 20:55:59.763974  761851 ssh_runner.go:195] Run: crio config
	I1202 20:55:59.830764  761851 cni.go:84] Creating CNI manager for ""
	I1202 20:55:59.830790  761851 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 20:55:59.830809  761851 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1202 20:55:59.830832  761851 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-386191 NodeName:embed-certs-386191 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 20:55:59.830950  761851 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-386191"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1202 20:55:59.831035  761851 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1202 20:55:59.841880  761851 binaries.go:51] Found k8s binaries, skipping transfer
	I1202 20:55:59.841954  761851 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 20:55:59.852027  761851 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (369 bytes)
	I1202 20:55:59.869099  761851 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1202 20:55:59.889821  761851 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2217 bytes)
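	The 2217 bytes just copied to /var/tmp/minikube/kubeadm.yaml.new are the InitConfiguration/ClusterConfiguration/KubeletConfiguration/KubeProxyConfiguration document printed above. When debugging a failed start, a file like this can be checked against kubeadm's own schema before anything boots; a minimal sketch, assuming the kubeadm binary bundled for v1.34.2 and the path from this log:

	sudo /var/lib/minikube/binaries/v1.34.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new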
	I1202 20:55:59.907811  761851 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1202 20:55:59.913347  761851 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
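	This /bin/bash one-liner is how the control-plane.minikube.internal alias gets pinned in the node's /etc/hosts: drop any stale line for the alias, append the current IP, and copy the temp file back over /etc/hosts (cp rather than mv, likely because /etc/hosts is a bind mount inside the kic container). The same steps unrolled for readability, with the IP and paths taken straight from the log:

	{ grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts;        # keep everything except an old alias line
	  echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts                                       # overwrite in place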
	I1202 20:55:59.927373  761851 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	W1202 20:55:56.478639  754876 pod_ready.go:104] pod "coredns-7d764666f9-ghxk6" is not "Ready", error: <nil>
	W1202 20:55:58.978346  754876 pod_ready.go:104] pod "coredns-7d764666f9-ghxk6" is not "Ready", error: <nil>
	I1202 20:56:00.050556  761851 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 20:56:00.077300  761851 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191 for IP: 192.168.103.2
	I1202 20:56:00.077325  761851 certs.go:195] generating shared ca certs ...
	I1202 20:56:00.077348  761851 certs.go:227] acquiring lock for ca certs: {Name:mkea11924193dd4dc459e16d70207bde383196df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:56:00.077530  761851 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-407427/.minikube/ca.key
	I1202 20:56:00.077575  761851 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-407427/.minikube/proxy-client-ca.key
	I1202 20:56:00.077588  761851 certs.go:257] generating profile certs ...
	I1202 20:56:00.077664  761851 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/client.key
	I1202 20:56:00.077682  761851 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/client.crt with IP's: []
	I1202 20:56:00.252632  761851 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/client.crt ...
	I1202 20:56:00.252663  761851 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/client.crt: {Name:mk9d10e4646efb676095250174819771b143a8ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:56:00.252877  761851 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/client.key ...
	I1202 20:56:00.252896  761851 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/client.key: {Name:mk09798c33ea1ea9f8eb08ebf47349e244c0760e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:56:00.253023  761851 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/apiserver.key.1b423d29
	I1202 20:56:00.253048  761851 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/apiserver.crt.1b423d29 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1202 20:56:00.432017  761851 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/apiserver.crt.1b423d29 ...
	I1202 20:56:00.432052  761851 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/apiserver.crt.1b423d29: {Name:mk6d91134ec48be46c0e886b478e71e1794c3cdc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:56:00.432278  761851 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/apiserver.key.1b423d29 ...
	I1202 20:56:00.432302  761851 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/apiserver.key.1b423d29: {Name:mk97fa0403fe534a503bf999364704991b597622 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:56:00.432413  761851 certs.go:382] copying /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/apiserver.crt.1b423d29 -> /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/apiserver.crt
	I1202 20:56:00.432512  761851 certs.go:386] copying /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/apiserver.key.1b423d29 -> /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/apiserver.key
	I1202 20:56:00.432593  761851 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/proxy-client.key
	I1202 20:56:00.432619  761851 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/proxy-client.crt with IP's: []
	I1202 20:56:00.527766  761851 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/proxy-client.crt ...
	I1202 20:56:00.527802  761851 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/proxy-client.crt: {Name:mke9848302a1327d00a26fb35bc8d56284a1ca08 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:56:00.528029  761851 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/proxy-client.key ...
	I1202 20:56:00.528053  761851 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/proxy-client.key: {Name:mk5b412430aa6855d80ede6a2641ba2256c9a484 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:56:00.528324  761851 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/411032.pem (1338 bytes)
	W1202 20:56:00.528374  761851 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-407427/.minikube/certs/411032_empty.pem, impossibly tiny 0 bytes
	I1202 20:56:00.528390  761851 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca-key.pem (1679 bytes)
	I1202 20:56:00.528423  761851 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca.pem (1082 bytes)
	I1202 20:56:00.528455  761851 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/cert.pem (1123 bytes)
	I1202 20:56:00.528493  761851 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/key.pem (1675 bytes)
	I1202 20:56:00.528552  761851 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/files/etc/ssl/certs/4110322.pem (1708 bytes)
	I1202 20:56:00.529432  761851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 20:56:00.554691  761851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1202 20:56:00.580499  761851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 20:56:00.606002  761851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1202 20:56:00.630389  761851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1202 20:56:00.655553  761851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1202 20:56:00.679419  761851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 20:56:00.704325  761851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1202 20:56:00.729255  761851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/files/etc/ssl/certs/4110322.pem --> /usr/share/ca-certificates/4110322.pem (1708 bytes)
	I1202 20:56:00.757910  761851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 20:56:00.782959  761851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/certs/411032.pem --> /usr/share/ca-certificates/411032.pem (1338 bytes)
	I1202 20:56:00.808564  761851 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 20:56:00.828291  761851 ssh_runner.go:195] Run: openssl version
	I1202 20:56:00.836796  761851 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4110322.pem && ln -fs /usr/share/ca-certificates/4110322.pem /etc/ssl/certs/4110322.pem"
	I1202 20:56:00.848469  761851 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4110322.pem
	I1202 20:56:00.853715  761851 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 20:13 /usr/share/ca-certificates/4110322.pem
	I1202 20:56:00.853790  761851 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4110322.pem
	I1202 20:56:00.905576  761851 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4110322.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 20:56:00.918463  761851 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 20:56:00.930339  761851 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 20:56:00.935452  761851 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 19:55 /usr/share/ca-certificates/minikubeCA.pem
	I1202 20:56:00.935522  761851 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 20:56:00.990051  761851 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 20:56:01.002960  761851 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/411032.pem && ln -fs /usr/share/ca-certificates/411032.pem /etc/ssl/certs/411032.pem"
	I1202 20:56:01.013994  761851 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/411032.pem
	I1202 20:56:01.019737  761851 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 20:13 /usr/share/ca-certificates/411032.pem
	I1202 20:56:01.019798  761851 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/411032.pem
	I1202 20:56:01.062700  761851 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/411032.pem /etc/ssl/certs/51391683.0"
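	The three rounds above (4110322.pem, minikubeCA.pem, 411032.pem) repeat one pattern for making an extra CA trusted by OpenSSL-based clients on the node: place the PEM under /usr/share/ca-certificates, compute its subject hash, and point /etc/ssl/certs/<hash>.0 at it. One round, reassembled from the commands in the log:

	pem=/usr/share/ca-certificates/minikubeCA.pem
	hash=$(openssl x509 -hash -noout -in "$pem")      # prints b5213941 for minikubeCA.pem above
	sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"     # the name OpenSSL looks up during chain verification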
	I1202 20:56:01.074487  761851 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 20:56:01.079958  761851 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1202 20:56:01.080033  761851 kubeadm.go:401] StartCluster: {Name:embed-certs-386191 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-386191 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 20:56:01.080164  761851 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 20:56:01.080231  761851 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 20:56:01.119713  761851 cri.go:89] found id: ""
	I1202 20:56:01.122354  761851 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 20:56:01.160024  761851 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1202 20:56:01.174466  761851 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1202 20:56:01.174517  761851 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 20:56:01.186198  761851 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1202 20:56:01.186294  761851 kubeadm.go:158] found existing configuration files:
	
	I1202 20:56:01.186361  761851 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1202 20:56:01.201548  761851 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1202 20:56:01.201623  761851 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1202 20:56:01.214153  761851 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1202 20:56:01.225107  761851 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1202 20:56:01.225225  761851 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 20:56:01.236050  761851 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1202 20:56:01.247714  761851 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1202 20:56:01.247785  761851 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 20:56:01.259129  761851 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1202 20:56:01.270914  761851 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1202 20:56:01.270981  761851 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1202 20:56:01.283320  761851 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1202 20:56:01.344042  761851 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1202 20:56:01.344150  761851 kubeadm.go:319] [preflight] Running pre-flight checks
	I1202 20:56:01.374696  761851 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1202 20:56:01.374786  761851 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1202 20:56:01.374832  761851 kubeadm.go:319] OS: Linux
	I1202 20:56:01.374904  761851 kubeadm.go:319] CGROUPS_CPU: enabled
	I1202 20:56:01.374965  761851 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1202 20:56:01.375027  761851 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1202 20:56:01.375100  761851 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1202 20:56:01.375165  761851 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1202 20:56:01.375227  761851 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1202 20:56:01.375295  761851 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1202 20:56:01.375351  761851 kubeadm.go:319] CGROUPS_IO: enabled
	I1202 20:56:01.461671  761851 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1202 20:56:01.461847  761851 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1202 20:56:01.462101  761851 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1202 20:56:01.473475  761851 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1202 20:56:00.519234  759377 pod_ready.go:104] pod "coredns-66bc5c9577-jrln7" is not "Ready", error: <nil>
	W1202 20:56:03.019288  759377 pod_ready.go:104] pod "coredns-66bc5c9577-jrln7" is not "Ready", error: <nil>
	I1202 20:56:01.478718  761851 out.go:252]   - Generating certificates and keys ...
	I1202 20:56:01.478829  761851 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1202 20:56:01.478911  761851 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1202 20:56:01.668758  761851 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1202 20:56:01.829895  761851 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1202 20:56:02.005376  761851 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1202 20:56:02.862909  761851 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1202 20:56:03.307052  761851 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1202 20:56:03.307703  761851 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-386191 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1202 20:56:03.383959  761851 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1202 20:56:03.384496  761851 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-386191 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1202 20:56:03.508307  761851 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1202 20:56:04.670556  761851 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1202 20:56:04.823930  761851 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1202 20:56:04.824007  761851 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	W1202 20:56:00.979309  754876 pod_ready.go:104] pod "coredns-7d764666f9-ghxk6" is not "Ready", error: <nil>
	W1202 20:56:02.980313  754876 pod_ready.go:104] pod "coredns-7d764666f9-ghxk6" is not "Ready", error: <nil>
	W1202 20:56:05.478729  754876 pod_ready.go:104] pod "coredns-7d764666f9-ghxk6" is not "Ready", error: <nil>
	I1202 20:56:05.205466  761851 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1202 20:56:05.375427  761851 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1202 20:56:05.434193  761851 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1202 20:56:05.863197  761851 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1202 20:56:06.053990  761851 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1202 20:56:06.054504  761851 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1202 20:56:06.058651  761851 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1202 20:56:05.517785  759377 pod_ready.go:104] pod "coredns-66bc5c9577-jrln7" is not "Ready", error: <nil>
	W1202 20:56:07.518439  759377 pod_ready.go:104] pod "coredns-66bc5c9577-jrln7" is not "Ready", error: <nil>
	I1202 20:56:06.060126  761851 out.go:252]   - Booting up control plane ...
	I1202 20:56:06.060244  761851 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1202 20:56:06.060364  761851 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1202 20:56:06.061268  761851 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1202 20:56:06.095037  761851 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1202 20:56:06.095189  761851 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1202 20:56:06.102515  761851 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1202 20:56:06.102696  761851 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1202 20:56:06.102769  761851 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1202 20:56:06.205490  761851 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1202 20:56:06.205715  761851 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1202 20:56:07.205674  761851 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001810301s
	I1202 20:56:07.209848  761851 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1202 20:56:07.210052  761851 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1202 20:56:07.210217  761851 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1202 20:56:07.210338  761851 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1202 20:56:08.756010  761851 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.546069674s
	I1202 20:56:09.869674  761851 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.659323153s
	W1202 20:56:07.979740  754876 pod_ready.go:104] pod "coredns-7d764666f9-ghxk6" is not "Ready", error: <nil>
	W1202 20:56:10.478689  754876 pod_ready.go:104] pod "coredns-7d764666f9-ghxk6" is not "Ready", error: <nil>
	I1202 20:56:11.711917  761851 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.502061899s
	I1202 20:56:11.728157  761851 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1202 20:56:11.740906  761851 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1202 20:56:11.753231  761851 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1202 20:56:11.753530  761851 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-386191 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1202 20:56:11.764705  761851 kubeadm.go:319] [bootstrap-token] Using token: c8uju2.57r80hlp0isn29k2
	I1202 20:56:11.766183  761851 out.go:252]   - Configuring RBAC rules ...
	I1202 20:56:11.766339  761851 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1202 20:56:11.770506  761851 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1202 20:56:11.777525  761851 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1202 20:56:11.780772  761851 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1202 20:56:11.785459  761851 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1202 20:56:11.788963  761851 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1202 20:56:12.119080  761851 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1202 20:56:12.539952  761851 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1202 20:56:13.118875  761851 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1202 20:56:13.119856  761851 kubeadm.go:319] 
	I1202 20:56:13.119972  761851 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1202 20:56:13.119991  761851 kubeadm.go:319] 
	I1202 20:56:13.120096  761851 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1202 20:56:13.120106  761851 kubeadm.go:319] 
	I1202 20:56:13.120132  761851 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1202 20:56:13.120189  761851 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1202 20:56:13.120239  761851 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1202 20:56:13.120250  761851 kubeadm.go:319] 
	I1202 20:56:13.120296  761851 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1202 20:56:13.120303  761851 kubeadm.go:319] 
	I1202 20:56:13.120350  761851 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1202 20:56:13.120356  761851 kubeadm.go:319] 
	I1202 20:56:13.120405  761851 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1202 20:56:13.120480  761851 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1202 20:56:13.120550  761851 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1202 20:56:13.120559  761851 kubeadm.go:319] 
	I1202 20:56:13.120655  761851 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1202 20:56:13.120760  761851 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1202 20:56:13.120770  761851 kubeadm.go:319] 
	I1202 20:56:13.120947  761851 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token c8uju2.57r80hlp0isn29k2 \
	I1202 20:56:13.121116  761851 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:48349ffa2369b87cd15480ece56edb1c5c5d97a828930f9dfbf1d1ceccf80ff4 \
	I1202 20:56:13.121150  761851 kubeadm.go:319] 	--control-plane 
	I1202 20:56:13.121158  761851 kubeadm.go:319] 
	I1202 20:56:13.121277  761851 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1202 20:56:13.121292  761851 kubeadm.go:319] 
	I1202 20:56:13.121403  761851 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token c8uju2.57r80hlp0isn29k2 \
	I1202 20:56:13.121546  761851 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:48349ffa2369b87cd15480ece56edb1c5c5d97a828930f9dfbf1d1ceccf80ff4 
	I1202 20:56:13.124563  761851 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1202 20:56:13.124664  761851 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1202 20:56:13.124688  761851 cni.go:84] Creating CNI manager for ""
	I1202 20:56:13.124700  761851 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 20:56:13.126500  761851 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1202 20:56:10.017702  759377 pod_ready.go:104] pod "coredns-66bc5c9577-jrln7" is not "Ready", error: <nil>
	W1202 20:56:12.018270  759377 pod_ready.go:104] pod "coredns-66bc5c9577-jrln7" is not "Ready", error: <nil>
	I1202 20:56:13.128206  761851 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1202 20:56:13.133011  761851 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1202 20:56:13.133036  761851 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1202 20:56:13.147210  761851 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1202 20:56:13.367880  761851 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1202 20:56:13.368008  761851 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 20:56:13.368037  761851 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-386191 minikube.k8s.io/updated_at=2025_12_02T20_56_13_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=c0dec3f81fa1d67ed4ee425e0873d0aa009f9b92 minikube.k8s.io/name=embed-certs-386191 minikube.k8s.io/primary=true
	I1202 20:56:13.378170  761851 ops.go:34] apiserver oom_adj: -16
	I1202 20:56:13.456213  761851 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 20:56:13.956791  761851 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 20:56:14.456911  761851 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 20:56:14.957002  761851 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1202 20:56:12.481885  754876 pod_ready.go:104] pod "coredns-7d764666f9-ghxk6" is not "Ready", error: <nil>
	I1202 20:56:14.478647  754876 pod_ready.go:94] pod "coredns-7d764666f9-ghxk6" is "Ready"
	I1202 20:56:14.478679  754876 pod_ready.go:86] duration metric: took 33.50633852s for pod "coredns-7d764666f9-ghxk6" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:14.481510  754876 pod_ready.go:83] waiting for pod "etcd-no-preload-336331" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:14.487252  754876 pod_ready.go:94] pod "etcd-no-preload-336331" is "Ready"
	I1202 20:56:14.487284  754876 pod_ready.go:86] duration metric: took 5.742661ms for pod "etcd-no-preload-336331" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:14.489709  754876 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-336331" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:14.493975  754876 pod_ready.go:94] pod "kube-apiserver-no-preload-336331" is "Ready"
	I1202 20:56:14.494030  754876 pod_ready.go:86] duration metric: took 4.293005ms for pod "kube-apiserver-no-preload-336331" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:14.496555  754876 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-336331" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:14.676017  754876 pod_ready.go:94] pod "kube-controller-manager-no-preload-336331" is "Ready"
	I1202 20:56:14.676054  754876 pod_ready.go:86] duration metric: took 179.468852ms for pod "kube-controller-manager-no-preload-336331" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:14.876507  754876 pod_ready.go:83] waiting for pod "kube-proxy-qc2v9" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:15.276156  754876 pod_ready.go:94] pod "kube-proxy-qc2v9" is "Ready"
	I1202 20:56:15.276184  754876 pod_ready.go:86] duration metric: took 399.652639ms for pod "kube-proxy-qc2v9" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:15.476929  754876 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-336331" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:15.876785  754876 pod_ready.go:94] pod "kube-scheduler-no-preload-336331" is "Ready"
	I1202 20:56:15.876821  754876 pod_ready.go:86] duration metric: took 399.859554ms for pod "kube-scheduler-no-preload-336331" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:15.876837  754876 pod_ready.go:40] duration metric: took 34.909444308s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1202 20:56:15.923408  754876 start.go:625] kubectl: 1.34.2, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1202 20:56:15.925124  754876 out.go:179] * Done! kubectl is now configured to use "no-preload-336331" cluster and "default" namespace by default
	I1202 20:56:15.457186  761851 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 20:56:15.957341  761851 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 20:56:16.456356  761851 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 20:56:16.956786  761851 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 20:56:17.457273  761851 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 20:56:17.529683  761851 kubeadm.go:1114] duration metric: took 4.161789754s to wait for elevateKubeSystemPrivileges
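	The burst of "kubectl get sa default" calls above is that wait: after kubeadm init, minikube polls roughly every 500ms until the default ServiceAccount exists before it finishes bringing the cluster up. A rough shell equivalent of the loop (illustrative only, not minikube's actual Go code; binary and kubeconfig paths are the ones from this log):

	KUBECTL=/var/lib/minikube/binaries/v1.34.2/kubectl
	until sudo "$KUBECTL" get sa default --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
		sleep 0.5    # kube-controller-manager has not created the default ServiceAccount yet
	done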
	I1202 20:56:17.529733  761851 kubeadm.go:403] duration metric: took 16.449707403s to StartCluster
	I1202 20:56:17.529758  761851 settings.go:142] acquiring lock: {Name:mk79858205e0c110b31d2645b49097e349ff37c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:56:17.529828  761851 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-407427/kubeconfig
	I1202 20:56:17.531386  761851 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-407427/kubeconfig: {Name:mk71769b01652992a8aaa239aae4f5c38b3500e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:56:17.531613  761851 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1202 20:56:17.531617  761851 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 20:56:17.531699  761851 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1202 20:56:17.531801  761851 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-386191"
	I1202 20:56:17.531817  761851 config.go:182] Loaded profile config "embed-certs-386191": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 20:56:17.531839  761851 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-386191"
	I1202 20:56:17.531817  761851 addons.go:70] Setting default-storageclass=true in profile "embed-certs-386191"
	I1202 20:56:17.531877  761851 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-386191"
	I1202 20:56:17.531882  761851 host.go:66] Checking if "embed-certs-386191" exists ...
	I1202 20:56:17.532342  761851 cli_runner.go:164] Run: docker container inspect embed-certs-386191 --format={{.State.Status}}
	I1202 20:56:17.532507  761851 cli_runner.go:164] Run: docker container inspect embed-certs-386191 --format={{.State.Status}}
	I1202 20:56:17.534531  761851 out.go:179] * Verifying Kubernetes components...
	I1202 20:56:17.535950  761851 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 20:56:17.558800  761851 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 20:56:17.560025  761851 addons.go:239] Setting addon default-storageclass=true in "embed-certs-386191"
	I1202 20:56:17.560084  761851 host.go:66] Checking if "embed-certs-386191" exists ...
	I1202 20:56:17.560580  761851 cli_runner.go:164] Run: docker container inspect embed-certs-386191 --format={{.State.Status}}
	I1202 20:56:17.561225  761851 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 20:56:17.561246  761851 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1202 20:56:17.561324  761851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-386191
	I1202 20:56:17.590711  761851 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33513 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/embed-certs-386191/id_rsa Username:docker}
	I1202 20:56:17.592956  761851 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1202 20:56:17.592992  761851 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1202 20:56:17.593051  761851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-386191
	I1202 20:56:17.617931  761851 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33513 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/embed-certs-386191/id_rsa Username:docker}
	I1202 20:56:17.638614  761851 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
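	The pipeline just launched is how the host.minikube.internal record is injected into CoreDNS (the "host record injected" line below confirms it took): dump the coredns ConfigMap, splice a hosts{} stanza and a log directive into the Corefile with sed, and feed the result back through kubectl replace. The same command broken across lines purely for readability (patterns and IP unchanged from the log):

	KUBECTL="sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig"
	$KUBECTL -n kube-system get configmap coredns -o yaml \
	  | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' \
	        -e '/^        errors *$/i \        log' \
	  | $KUBECTL replace -f -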
	I1202 20:56:17.681673  761851 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 20:56:17.712144  761851 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 20:56:17.735866  761851 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1202 20:56:17.815035  761851 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1202 20:56:17.816483  761851 node_ready.go:35] waiting up to 6m0s for node "embed-certs-386191" to be "Ready" ...
	I1202 20:56:18.003767  761851 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1202 20:56:14.018515  759377 pod_ready.go:104] pod "coredns-66bc5c9577-jrln7" is not "Ready", error: <nil>
	W1202 20:56:16.020009  759377 pod_ready.go:104] pod "coredns-66bc5c9577-jrln7" is not "Ready", error: <nil>
	W1202 20:56:18.517905  759377 pod_ready.go:104] pod "coredns-66bc5c9577-jrln7" is not "Ready", error: <nil>
	I1202 20:56:18.004793  761851 addons.go:530] duration metric: took 473.08842ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1202 20:56:18.319554  761851 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-386191" context rescaled to 1 replicas
	W1202 20:56:19.820111  761851 node_ready.go:57] node "embed-certs-386191" has "Ready":"False" status (will retry)
	W1202 20:56:21.019501  759377 pod_ready.go:104] pod "coredns-66bc5c9577-jrln7" is not "Ready", error: <nil>
	W1202 20:56:23.518373  759377 pod_ready.go:104] pod "coredns-66bc5c9577-jrln7" is not "Ready", error: <nil>
	W1202 20:56:22.320036  761851 node_ready.go:57] node "embed-certs-386191" has "Ready":"False" status (will retry)
	W1202 20:56:24.320559  761851 node_ready.go:57] node "embed-certs-386191" has "Ready":"False" status (will retry)
	W1202 20:56:26.018767  759377 pod_ready.go:104] pod "coredns-66bc5c9577-jrln7" is not "Ready", error: <nil>
	W1202 20:56:28.019223  759377 pod_ready.go:104] pod "coredns-66bc5c9577-jrln7" is not "Ready", error: <nil>
	W1202 20:56:26.320730  761851 node_ready.go:57] node "embed-certs-386191" has "Ready":"False" status (will retry)
	W1202 20:56:28.820145  761851 node_ready.go:57] node "embed-certs-386191" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Dec 02 20:55:57 no-preload-336331 crio[567]: time="2025-12-02T20:55:57.719458696Z" level=info msg="Started container" PID=1739 containerID=65a98944e23b2051f5d2803b7cd4f48cd36fbd6fd8863e62252f9e57766b98ad description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nh2q4/dashboard-metrics-scraper id=92a1ecf7-01d5-4ec5-b271-a179b3c47a41 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3dab1966c7713a84ff487c510cbd7211f3f6958e62f55dd3b50773368e24fdea
	Dec 02 20:55:57 no-preload-336331 crio[567]: time="2025-12-02T20:55:57.766524766Z" level=info msg="Removing container: 2c5a874275c99b8ae5c4236310bf903c2bb613d66005d99b341d04c382954a4c" id=cf0116f9-11fe-4ee8-8815-2ca38ee9d31c name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 02 20:55:57 no-preload-336331 crio[567]: time="2025-12-02T20:55:57.777595312Z" level=info msg="Removed container 2c5a874275c99b8ae5c4236310bf903c2bb613d66005d99b341d04c382954a4c: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nh2q4/dashboard-metrics-scraper" id=cf0116f9-11fe-4ee8-8815-2ca38ee9d31c name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 02 20:56:10 no-preload-336331 crio[567]: time="2025-12-02T20:56:10.808732694Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=20bc2c48-0df6-4298-8d3a-54914f8033fe name=/runtime.v1.ImageService/ImageStatus
	Dec 02 20:56:10 no-preload-336331 crio[567]: time="2025-12-02T20:56:10.809749595Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=3af65e13-27b0-4ceb-b127-1e547de16b43 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 20:56:10 no-preload-336331 crio[567]: time="2025-12-02T20:56:10.810858267Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=f33d71fd-19a8-4c88-8df1-59f3d44ef85a name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 20:56:10 no-preload-336331 crio[567]: time="2025-12-02T20:56:10.811020282Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 20:56:10 no-preload-336331 crio[567]: time="2025-12-02T20:56:10.815140415Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 20:56:10 no-preload-336331 crio[567]: time="2025-12-02T20:56:10.815359705Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/9bf7b5f56bdafbf738d4a385934c914be8bbfd1f7e63a6311be0fc3c81950523/merged/etc/passwd: no such file or directory"
	Dec 02 20:56:10 no-preload-336331 crio[567]: time="2025-12-02T20:56:10.815396694Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/9bf7b5f56bdafbf738d4a385934c914be8bbfd1f7e63a6311be0fc3c81950523/merged/etc/group: no such file or directory"
	Dec 02 20:56:10 no-preload-336331 crio[567]: time="2025-12-02T20:56:10.815698732Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 20:56:10 no-preload-336331 crio[567]: time="2025-12-02T20:56:10.844850747Z" level=info msg="Created container 6a58ab9b9c0482bd5f103029c7c0f3bdb6d5c02e0fc49f59a43c1b17c375958e: kube-system/storage-provisioner/storage-provisioner" id=f33d71fd-19a8-4c88-8df1-59f3d44ef85a name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 20:56:10 no-preload-336331 crio[567]: time="2025-12-02T20:56:10.845604487Z" level=info msg="Starting container: 6a58ab9b9c0482bd5f103029c7c0f3bdb6d5c02e0fc49f59a43c1b17c375958e" id=9cb202c7-21dc-4ba1-addd-39ee1b88fc88 name=/runtime.v1.RuntimeService/StartContainer
	Dec 02 20:56:10 no-preload-336331 crio[567]: time="2025-12-02T20:56:10.847929485Z" level=info msg="Started container" PID=1757 containerID=6a58ab9b9c0482bd5f103029c7c0f3bdb6d5c02e0fc49f59a43c1b17c375958e description=kube-system/storage-provisioner/storage-provisioner id=9cb202c7-21dc-4ba1-addd-39ee1b88fc88 name=/runtime.v1.RuntimeService/StartContainer sandboxID=247c55173c29d3744d6e9d786b5583c5b587877b11e48d4fe03ea16eeb0d052e
	Dec 02 20:56:21 no-preload-336331 crio[567]: time="2025-12-02T20:56:21.671769143Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=ab3814fc-1d04-4252-9bd2-b9236e2d292e name=/runtime.v1.ImageService/ImageStatus
	Dec 02 20:56:21 no-preload-336331 crio[567]: time="2025-12-02T20:56:21.672877432Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=fadb8996-012d-4100-a023-4be609bd4340 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 20:56:21 no-preload-336331 crio[567]: time="2025-12-02T20:56:21.673871497Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nh2q4/dashboard-metrics-scraper" id=909aa2ad-dbf6-4695-a39d-210107f4165f name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 20:56:21 no-preload-336331 crio[567]: time="2025-12-02T20:56:21.674035377Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 20:56:21 no-preload-336331 crio[567]: time="2025-12-02T20:56:21.68161873Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 20:56:21 no-preload-336331 crio[567]: time="2025-12-02T20:56:21.682243573Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 20:56:21 no-preload-336331 crio[567]: time="2025-12-02T20:56:21.717256159Z" level=info msg="Created container a466626000385a894ca35d0cbbd705f0a7ea58df0bec6d3ee73e98444e45ee26: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nh2q4/dashboard-metrics-scraper" id=909aa2ad-dbf6-4695-a39d-210107f4165f name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 20:56:21 no-preload-336331 crio[567]: time="2025-12-02T20:56:21.718021039Z" level=info msg="Starting container: a466626000385a894ca35d0cbbd705f0a7ea58df0bec6d3ee73e98444e45ee26" id=c34ec2cb-ed34-433b-934b-4dbeba7b4a06 name=/runtime.v1.RuntimeService/StartContainer
	Dec 02 20:56:21 no-preload-336331 crio[567]: time="2025-12-02T20:56:21.719868279Z" level=info msg="Started container" PID=1792 containerID=a466626000385a894ca35d0cbbd705f0a7ea58df0bec6d3ee73e98444e45ee26 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nh2q4/dashboard-metrics-scraper id=c34ec2cb-ed34-433b-934b-4dbeba7b4a06 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3dab1966c7713a84ff487c510cbd7211f3f6958e62f55dd3b50773368e24fdea
	Dec 02 20:56:21 no-preload-336331 crio[567]: time="2025-12-02T20:56:21.841901993Z" level=info msg="Removing container: 65a98944e23b2051f5d2803b7cd4f48cd36fbd6fd8863e62252f9e57766b98ad" id=54aa0970-a3b9-4d90-bebf-2ebb18b43307 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 02 20:56:21 no-preload-336331 crio[567]: time="2025-12-02T20:56:21.852500794Z" level=info msg="Removed container 65a98944e23b2051f5d2803b7cd4f48cd36fbd6fd8863e62252f9e57766b98ad: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nh2q4/dashboard-metrics-scraper" id=54aa0970-a3b9-4d90-bebf-2ebb18b43307 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	a466626000385       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           9 seconds ago       Exited              dashboard-metrics-scraper   3                   3dab1966c7713       dashboard-metrics-scraper-867fb5f87b-nh2q4   kubernetes-dashboard
	6a58ab9b9c048       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           20 seconds ago      Running             storage-provisioner         1                   247c55173c29d       storage-provisioner                          kube-system
	392425906cb89       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   40 seconds ago      Running             kubernetes-dashboard        0                   8b3d9c6e13687       kubernetes-dashboard-b84665fb8-njbfb         kubernetes-dashboard
	88167c0c54572       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                           51 seconds ago      Running             coredns                     0                   f2acf59363960       coredns-7d764666f9-ghxk6                     kube-system
	14998959c7ac1       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           51 seconds ago      Running             busybox                     1                   bd8482640477a       busybox                                      default
	a8e5580b374ec       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           51 seconds ago      Running             kindnet-cni                 0                   698949487dc26       kindnet-5blk7                                kube-system
	80e9078d18ca5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           51 seconds ago      Exited              storage-provisioner         0                   247c55173c29d       storage-provisioner                          kube-system
	4ba2da46b3cf6       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810                                           51 seconds ago      Running             kube-proxy                  0                   e141c1deb2aa7       kube-proxy-qc2v9                             kube-system
	fe483c8206ed4       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           52 seconds ago      Running             etcd                        0                   9d5f6bc769821       etcd-no-preload-336331                       kube-system
	8a39789ad0781       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc                                           52 seconds ago      Running             kube-controller-manager     0                   1a180f6e9da0d       kube-controller-manager-no-preload-336331    kube-system
	cec9f1979d354       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46                                           52 seconds ago      Running             kube-scheduler              0                   d70011c809154       kube-scheduler-no-preload-336331             kube-system
	9d960cc48cf5c       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b                                           53 seconds ago      Running             kube-apiserver              0                   141b2cc839d51       kube-apiserver-no-preload-336331             kube-system
	
	
	==> coredns [88167c0c5457270abf23d0e9c8ba2c26bd39e3fd35acbcc5be0ec9337db24e9e] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:55185 - 37920 "HINFO IN 4806991979558089534.5762877512650251915. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.027381649s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	
	
	==> describe nodes <==
	Name:               no-preload-336331
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-336331
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c0dec3f81fa1d67ed4ee425e0873d0aa009f9b92
	                    minikube.k8s.io/name=no-preload-336331
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_02T20_54_39_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 02 Dec 2025 20:54:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-336331
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 02 Dec 2025 20:56:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 02 Dec 2025 20:56:10 +0000   Tue, 02 Dec 2025 20:54:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 02 Dec 2025 20:56:10 +0000   Tue, 02 Dec 2025 20:54:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 02 Dec 2025 20:56:10 +0000   Tue, 02 Dec 2025 20:54:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 02 Dec 2025 20:56:10 +0000   Tue, 02 Dec 2025 20:54:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-336331
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 c31a325af81b969158c21fa769271857
	  System UUID:                3a1272e4-255b-4719-83a7-b5faa7d71457
	  Boot ID:                    9dd5456f-e394-4b4b-9458-48d26faf507e
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 coredns-7d764666f9-ghxk6                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     107s
	  kube-system                 etcd-no-preload-336331                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         113s
	  kube-system                 kindnet-5blk7                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      107s
	  kube-system                 kube-apiserver-no-preload-336331              250m (3%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-controller-manager-no-preload-336331     200m (2%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-proxy-qc2v9                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-scheduler-no-preload-336331              100m (1%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-nh2q4    0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-njbfb          0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  108s  node-controller  Node no-preload-336331 event: Registered Node no-preload-336331 in Controller
	  Normal  RegisteredNode  49s   node-controller  Node no-preload-336331 event: Registered Node no-preload-336331 in Controller
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: a6 a8 6c 2c e4 fb 66 95 0d 9f b9 e6 08 00
	[ +32.254238] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a6 a8 6c 2c e4 fb 66 95 0d 9f b9 e6 08 00
	[Dec 2 20:52] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 62 03 bd 14 45 8a 08 06
	[  +0.000590] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 96 27 ad 0d 40 04 08 06
	[Dec 2 20:53] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 56 d8 4f 6f 7c f7 08 06
	[  +0.000700] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 82 e4 ba c0 78 5f 08 06
	[ +10.119645] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000022] ll header: 00000000: ff ff ff ff ff ff ea 15 d8 88 c9 57 08 06
	[  +2.447166] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 62 df 09 53 d6 6e 08 06
	[  +0.000374] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 62 8d 06 71 0a 5e 08 06
	[Dec 2 20:54] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 12 47 13 50 f6 bc 08 06
	[  +0.001523] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ea 15 d8 88 c9 57 08 06
	[ +22.123549] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 36 0d 45 06 42 2a 08 06
	[  +0.000389] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 56 d8 4f 6f 7c f7 08 06
	
	
	==> etcd [fe483c8206ed4feb9f82c31650dd1c179edfd56fdbd85b46b0866b331f6ea99d] <==
	{"level":"warn","ts":"2025-12-02T20:55:38.923390Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40612","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:55:38.937708Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40634","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:55:38.946997Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:55:38.956714Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:55:38.964668Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:55:38.975032Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40714","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:55:38.983518Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40738","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:55:38.992147Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40758","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:55:39.001116Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:55:39.010267Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:55:39.019092Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40800","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:55:39.026508Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40818","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:55:39.034719Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40842","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:55:39.043158Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:55:39.051131Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:55:39.059607Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:55:39.067421Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:55:39.074466Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40904","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:55:39.081587Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40922","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:55:39.088808Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:55:39.103014Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40944","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:55:39.109590Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40956","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:55:39.116342Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40968","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:55:39.175169Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41000","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-02T20:55:45.965454Z","caller":"traceutil/trace.go:172","msg":"trace[1181498233] transaction","detail":"{read_only:false; response_revision:549; number_of_response:1; }","duration":"106.052706ms","start":"2025-12-02T20:55:45.859374Z","end":"2025-12-02T20:55:45.965427Z","steps":["trace[1181498233] 'process raft request'  (duration: 71.783873ms)","trace[1181498233] 'compare'  (duration: 34.139386ms)"],"step_count":2}
	
	
	==> kernel <==
	 20:56:31 up  2:38,  0 user,  load average: 4.22, 4.10, 2.72
	Linux no-preload-336331 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [a8e5580b374ec483ae25a01dd060fd7f0ac21c7f4a3afc6999fc62d8ede79880] <==
	I1202 20:55:40.373795       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1202 20:55:40.374118       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1202 20:55:40.374314       1 main.go:148] setting mtu 1500 for CNI 
	I1202 20:55:40.374334       1 main.go:178] kindnetd IP family: "ipv4"
	I1202 20:55:40.374360       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-02T20:55:40Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1202 20:55:40.644139       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1202 20:55:40.644781       1 controller.go:381] "Waiting for informer caches to sync"
	I1202 20:55:40.644825       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1202 20:55:40.645080       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1202 20:55:41.071023       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1202 20:55:41.071060       1 metrics.go:72] Registering metrics
	I1202 20:55:41.071138       1 controller.go:711] "Syncing nftables rules"
	I1202 20:55:50.644744       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1202 20:55:50.644860       1 main.go:301] handling current node
	I1202 20:56:00.645162       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1202 20:56:00.645210       1 main.go:301] handling current node
	I1202 20:56:10.644395       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1202 20:56:10.644434       1 main.go:301] handling current node
	I1202 20:56:20.644132       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1202 20:56:20.644189       1 main.go:301] handling current node
	I1202 20:56:30.651139       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1202 20:56:30.651178       1 main.go:301] handling current node
	
	
	==> kube-apiserver [9d960cc48cf5c1a7210c34cfa4e205107d9dd729104ed2798e71e12ba001d7ec] <==
	I1202 20:55:39.683936       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1202 20:55:39.683947       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1202 20:55:39.683922       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1202 20:55:39.686139       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1202 20:55:39.683975       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1202 20:55:39.684611       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1202 20:55:39.685950       1 aggregator.go:187] initial CRD sync complete...
	I1202 20:55:39.686483       1 autoregister_controller.go:144] Starting autoregister controller
	I1202 20:55:39.686501       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1202 20:55:39.686519       1 cache.go:39] Caches are synced for autoregister controller
	I1202 20:55:39.701144       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1202 20:55:39.707553       1 shared_informer.go:377] "Caches are synced"
	I1202 20:55:39.707589       1 policy_source.go:248] refreshing policies
	I1202 20:55:39.714286       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1202 20:55:39.815541       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1202 20:55:40.128902       1 controller.go:667] quota admission added evaluator for: namespaces
	I1202 20:55:40.180593       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1202 20:55:40.217425       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1202 20:55:40.230825       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1202 20:55:40.302713       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.167.208"}
	I1202 20:55:40.322954       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.109.127.61"}
	I1202 20:55:40.577156       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1202 20:55:43.316173       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1202 20:55:43.412589       1 controller.go:667] quota admission added evaluator for: endpoints
	I1202 20:55:43.462535       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [8a39789ad0781128fb83397c05c270ff26c09bd32ec5d4c90b8ca4d3a01533cd] <==
	I1202 20:55:42.816619       1 shared_informer.go:377] "Caches are synced"
	I1202 20:55:42.815959       1 shared_informer.go:377] "Caches are synced"
	I1202 20:55:42.816948       1 shared_informer.go:377] "Caches are synced"
	I1202 20:55:42.816362       1 shared_informer.go:377] "Caches are synced"
	I1202 20:55:42.817172       1 shared_informer.go:377] "Caches are synced"
	I1202 20:55:42.816363       1 shared_informer.go:377] "Caches are synced"
	I1202 20:55:42.817311       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1202 20:55:42.817404       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="no-preload-336331"
	I1202 20:55:42.818318       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1202 20:55:42.816381       1 shared_informer.go:377] "Caches are synced"
	I1202 20:55:42.818468       1 shared_informer.go:377] "Caches are synced"
	I1202 20:55:42.817667       1 shared_informer.go:377] "Caches are synced"
	I1202 20:55:42.817527       1 shared_informer.go:377] "Caches are synced"
	I1202 20:55:42.817586       1 shared_informer.go:377] "Caches are synced"
	I1202 20:55:42.817737       1 shared_informer.go:377] "Caches are synced"
	I1202 20:55:42.818157       1 shared_informer.go:377] "Caches are synced"
	I1202 20:55:42.818585       1 shared_informer.go:377] "Caches are synced"
	I1202 20:55:42.819730       1 shared_informer.go:377] "Caches are synced"
	I1202 20:55:42.819763       1 shared_informer.go:377] "Caches are synced"
	I1202 20:55:42.823584       1 shared_informer.go:370] "Waiting for caches to sync"
	I1202 20:55:42.823963       1 shared_informer.go:377] "Caches are synced"
	I1202 20:55:42.917906       1 shared_informer.go:377] "Caches are synced"
	I1202 20:55:42.917935       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1202 20:55:42.917942       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1202 20:55:42.924759       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [4ba2da46b3cf6cbb497d0561308e1e2541b679b2fee63afd57d239b2a5487d39] <==
	I1202 20:55:40.128839       1 server_linux.go:53] "Using iptables proxy"
	I1202 20:55:40.209286       1 shared_informer.go:370] "Waiting for caches to sync"
	I1202 20:55:40.310402       1 shared_informer.go:377] "Caches are synced"
	I1202 20:55:40.310454       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1202 20:55:40.310569       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1202 20:55:40.338607       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1202 20:55:40.338676       1 server_linux.go:136] "Using iptables Proxier"
	I1202 20:55:40.345799       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1202 20:55:40.346346       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1202 20:55:40.346370       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 20:55:40.348047       1 config.go:403] "Starting serviceCIDR config controller"
	I1202 20:55:40.348218       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1202 20:55:40.348325       1 config.go:106] "Starting endpoint slice config controller"
	I1202 20:55:40.348481       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1202 20:55:40.351508       1 config.go:309] "Starting node config controller"
	I1202 20:55:40.351634       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1202 20:55:40.351657       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1202 20:55:40.347900       1 config.go:200] "Starting service config controller"
	I1202 20:55:40.352448       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1202 20:55:40.452379       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1202 20:55:40.452392       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1202 20:55:40.452532       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [cec9f1979d354143b12bba5938c36bf941dd1a2a9c5096761b95b27d36bc9e59] <==
	I1202 20:55:38.474086       1 serving.go:386] Generated self-signed cert in-memory
	W1202 20:55:39.600733       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1202 20:55:39.600768       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1202 20:55:39.600780       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1202 20:55:39.600789       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1202 20:55:39.640554       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-beta.0"
	I1202 20:55:39.641132       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 20:55:39.645801       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1202 20:55:39.645907       1 shared_informer.go:370] "Waiting for caches to sync"
	I1202 20:55:39.646531       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1202 20:55:39.646820       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1202 20:55:39.746385       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 02 20:55:51 no-preload-336331 kubelet[716]: E1202 20:55:51.747806     716 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-no-preload-336331" containerName="kube-scheduler"
	Dec 02 20:55:52 no-preload-336331 kubelet[716]: E1202 20:55:52.749419     716 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-njbfb" containerName="kubernetes-dashboard"
	Dec 02 20:55:55 no-preload-336331 kubelet[716]: E1202 20:55:55.584351     716 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-no-preload-336331" containerName="kube-controller-manager"
	Dec 02 20:55:55 no-preload-336331 kubelet[716]: I1202 20:55:55.632120     716 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-njbfb" podStartSLOduration=5.369292721 podStartE2EDuration="12.632096499s" podCreationTimestamp="2025-12-02 20:55:43 +0000 UTC" firstStartedPulling="2025-12-02 20:55:43.730566146 +0000 UTC m=+6.173422791" lastFinishedPulling="2025-12-02 20:55:50.99336993 +0000 UTC m=+13.436226569" observedRunningTime="2025-12-02 20:55:51.765084253 +0000 UTC m=+14.207940905" watchObservedRunningTime="2025-12-02 20:55:55.632096499 +0000 UTC m=+18.074953155"
	Dec 02 20:55:57 no-preload-336331 kubelet[716]: E1202 20:55:57.670639     716 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nh2q4" containerName="dashboard-metrics-scraper"
	Dec 02 20:55:57 no-preload-336331 kubelet[716]: I1202 20:55:57.670678     716 scope.go:122] "RemoveContainer" containerID="2c5a874275c99b8ae5c4236310bf903c2bb613d66005d99b341d04c382954a4c"
	Dec 02 20:55:57 no-preload-336331 kubelet[716]: I1202 20:55:57.764964     716 scope.go:122] "RemoveContainer" containerID="2c5a874275c99b8ae5c4236310bf903c2bb613d66005d99b341d04c382954a4c"
	Dec 02 20:55:57 no-preload-336331 kubelet[716]: E1202 20:55:57.765292     716 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nh2q4" containerName="dashboard-metrics-scraper"
	Dec 02 20:55:57 no-preload-336331 kubelet[716]: I1202 20:55:57.765331     716 scope.go:122] "RemoveContainer" containerID="65a98944e23b2051f5d2803b7cd4f48cd36fbd6fd8863e62252f9e57766b98ad"
	Dec 02 20:55:57 no-preload-336331 kubelet[716]: E1202 20:55:57.765529     716 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-nh2q4_kubernetes-dashboard(3114ee57-4f0d-415c-8ca7-2fdbe67e1e5c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nh2q4" podUID="3114ee57-4f0d-415c-8ca7-2fdbe67e1e5c"
	Dec 02 20:56:00 no-preload-336331 kubelet[716]: E1202 20:56:00.895700     716 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nh2q4" containerName="dashboard-metrics-scraper"
	Dec 02 20:56:00 no-preload-336331 kubelet[716]: I1202 20:56:00.895752     716 scope.go:122] "RemoveContainer" containerID="65a98944e23b2051f5d2803b7cd4f48cd36fbd6fd8863e62252f9e57766b98ad"
	Dec 02 20:56:00 no-preload-336331 kubelet[716]: E1202 20:56:00.895979     716 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-nh2q4_kubernetes-dashboard(3114ee57-4f0d-415c-8ca7-2fdbe67e1e5c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nh2q4" podUID="3114ee57-4f0d-415c-8ca7-2fdbe67e1e5c"
	Dec 02 20:56:10 no-preload-336331 kubelet[716]: I1202 20:56:10.808218     716 scope.go:122] "RemoveContainer" containerID="80e9078d18ca5da464c12fd5d5b48d960e2c5867e07585bee911e523c9a0630a"
	Dec 02 20:56:13 no-preload-336331 kubelet[716]: E1202 20:56:13.965399     716 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-ghxk6" containerName="coredns"
	Dec 02 20:56:21 no-preload-336331 kubelet[716]: E1202 20:56:21.671150     716 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nh2q4" containerName="dashboard-metrics-scraper"
	Dec 02 20:56:21 no-preload-336331 kubelet[716]: I1202 20:56:21.671211     716 scope.go:122] "RemoveContainer" containerID="65a98944e23b2051f5d2803b7cd4f48cd36fbd6fd8863e62252f9e57766b98ad"
	Dec 02 20:56:21 no-preload-336331 kubelet[716]: I1202 20:56:21.840525     716 scope.go:122] "RemoveContainer" containerID="65a98944e23b2051f5d2803b7cd4f48cd36fbd6fd8863e62252f9e57766b98ad"
	Dec 02 20:56:21 no-preload-336331 kubelet[716]: E1202 20:56:21.840807     716 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nh2q4" containerName="dashboard-metrics-scraper"
	Dec 02 20:56:21 no-preload-336331 kubelet[716]: I1202 20:56:21.840847     716 scope.go:122] "RemoveContainer" containerID="a466626000385a894ca35d0cbbd705f0a7ea58df0bec6d3ee73e98444e45ee26"
	Dec 02 20:56:21 no-preload-336331 kubelet[716]: E1202 20:56:21.841061     716 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-nh2q4_kubernetes-dashboard(3114ee57-4f0d-415c-8ca7-2fdbe67e1e5c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nh2q4" podUID="3114ee57-4f0d-415c-8ca7-2fdbe67e1e5c"
	Dec 02 20:56:28 no-preload-336331 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 02 20:56:28 no-preload-336331 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 02 20:56:28 no-preload-336331 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 20:56:28 no-preload-336331 systemd[1]: kubelet.service: Consumed 1.906s CPU time.
	
	
	==> kubernetes-dashboard [392425906cb8929a82cf3b6a301d75ecc8a3f2afb4aca218a52e369092d206a5] <==
	2025/12/02 20:55:51 Using namespace: kubernetes-dashboard
	2025/12/02 20:55:51 Using in-cluster config to connect to apiserver
	2025/12/02 20:55:51 Using secret token for csrf signing
	2025/12/02 20:55:51 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/02 20:55:51 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/02 20:55:51 Successful initial request to the apiserver, version: v1.35.0-beta.0
	2025/12/02 20:55:51 Generating JWE encryption key
	2025/12/02 20:55:51 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/02 20:55:51 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/02 20:55:51 Initializing JWE encryption key from synchronized object
	2025/12/02 20:55:51 Creating in-cluster Sidecar client
	2025/12/02 20:55:51 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/02 20:55:51 Serving insecurely on HTTP port: 9090
	2025/12/02 20:56:21 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/02 20:55:51 Starting overwatch
	
	
	==> storage-provisioner [6a58ab9b9c0482bd5f103029c7c0f3bdb6d5c02e0fc49f59a43c1b17c375958e] <==
	I1202 20:56:10.861553       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1202 20:56:10.870848       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1202 20:56:10.870913       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1202 20:56:10.873854       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:56:14.329161       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:56:18.590289       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:56:22.189346       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:56:25.243477       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:56:28.266308       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:56:28.271132       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1202 20:56:28.271288       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1202 20:56:28.271455       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3ea12a83-8249-476a-aff4-76a34b961543", APIVersion:"v1", ResourceVersion:"644", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-336331_2dd48adb-dd9f-44e7-b07a-258cd92825d9 became leader
	I1202 20:56:28.271492       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-336331_2dd48adb-dd9f-44e7-b07a-258cd92825d9!
	W1202 20:56:28.273784       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:56:28.278118       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1202 20:56:28.371767       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-336331_2dd48adb-dd9f-44e7-b07a-258cd92825d9!
	W1202 20:56:30.281622       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:56:30.287541       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [80e9078d18ca5da464c12fd5d5b48d960e2c5867e07585bee911e523c9a0630a] <==
	I1202 20:55:40.115598       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1202 20:56:10.118971       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-336331 -n no-preload-336331
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-336331 -n no-preload-336331: exit status 2 (369.92104ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-336331 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-336331
helpers_test.go:243: (dbg) docker inspect no-preload-336331:

-- stdout --
	[
	    {
	        "Id": "5c0b97280754bca09a214a1209794eaed88bdb20761bc875787e6ff1daeba56e",
	        "Created": "2025-12-02T20:54:14.239653127Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 755084,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-02T20:55:31.061844503Z",
	            "FinishedAt": "2025-12-02T20:55:30.136527058Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/5c0b97280754bca09a214a1209794eaed88bdb20761bc875787e6ff1daeba56e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5c0b97280754bca09a214a1209794eaed88bdb20761bc875787e6ff1daeba56e/hostname",
	        "HostsPath": "/var/lib/docker/containers/5c0b97280754bca09a214a1209794eaed88bdb20761bc875787e6ff1daeba56e/hosts",
	        "LogPath": "/var/lib/docker/containers/5c0b97280754bca09a214a1209794eaed88bdb20761bc875787e6ff1daeba56e/5c0b97280754bca09a214a1209794eaed88bdb20761bc875787e6ff1daeba56e-json.log",
	        "Name": "/no-preload-336331",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-336331:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-336331",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5c0b97280754bca09a214a1209794eaed88bdb20761bc875787e6ff1daeba56e",
	                "LowerDir": "/var/lib/docker/overlay2/594362d957f037a0e8c8f90d32655c29146773c96403d1c3d09c40858d94140a-init/diff:/var/lib/docker/overlay2/49bb44e987885c48600d5ae7ad3c81cad82a00f38070c0460882c5746d9fae59/diff",
	                "MergedDir": "/var/lib/docker/overlay2/594362d957f037a0e8c8f90d32655c29146773c96403d1c3d09c40858d94140a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/594362d957f037a0e8c8f90d32655c29146773c96403d1c3d09c40858d94140a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/594362d957f037a0e8c8f90d32655c29146773c96403d1c3d09c40858d94140a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-336331",
	                "Source": "/var/lib/docker/volumes/no-preload-336331/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-336331",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-336331",
	                "name.minikube.sigs.k8s.io": "no-preload-336331",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "65a2e75519167fbc561bca923d1439e164447b61d1dfde6b46aaf79c359426ed",
	            "SandboxKey": "/var/run/docker/netns/65a2e7551916",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33503"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33504"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33507"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33505"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33506"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-336331": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "be4fb772701bc21d00b8604cf864a912ac52112a68f7d1c80495359c23362a1c",
	                    "EndpointID": "3965c027c19be89aa4713f677d35eccaa82185930131aa0c66d5390c5517e84f",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "06:8c:35:25:35:4c",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-336331",
	                        "5c0b97280754"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-336331 -n no-preload-336331
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-336331 -n no-preload-336331: exit status 2 (344.238336ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-336331 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-336331 logs -n 25: (1.209152136s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p newest-cni-245604 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-245604            │ jenkins │ v1.37.0 │ 02 Dec 25 20:54 UTC │ 02 Dec 25 20:55 UTC │
	│ addons  │ enable metrics-server -p no-preload-336331 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ no-preload-336331            │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │                     │
	│ stop    │ -p no-preload-336331 --alsologtostderr -v=3                                                                                                                                                                                                          │ no-preload-336331            │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:55 UTC │
	│ addons  │ enable metrics-server -p newest-cni-245604 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-245604            │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-997805 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-997805 │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │                     │
	│ stop    │ -p newest-cni-245604 --alsologtostderr -v=3                                                                                                                                                                                                          │ newest-cni-245604            │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:55 UTC │
	│ stop    │ -p default-k8s-diff-port-997805 --alsologtostderr -v=3                                                                                                                                                                                               │ default-k8s-diff-port-997805 │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:55 UTC │
	│ addons  │ enable dashboard -p newest-cni-245604 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ newest-cni-245604            │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:55 UTC │
	│ start   │ -p newest-cni-245604 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-245604            │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:55 UTC │
	│ addons  │ enable dashboard -p no-preload-336331 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ no-preload-336331            │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:55 UTC │
	│ start   │ -p no-preload-336331 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-336331            │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:56 UTC │
	│ image   │ newest-cni-245604 image list --format=json                                                                                                                                                                                                           │ newest-cni-245604            │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:55 UTC │
	│ pause   │ -p newest-cni-245604 --alsologtostderr -v=1                                                                                                                                                                                                          │ newest-cni-245604            │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-997805 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-997805 │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:55 UTC │
	│ start   │ -p default-k8s-diff-port-997805 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-997805 │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │                     │
	│ delete  │ -p newest-cni-245604                                                                                                                                                                                                                                 │ newest-cni-245604            │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:55 UTC │
	│ delete  │ -p newest-cni-245604                                                                                                                                                                                                                                 │ newest-cni-245604            │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:55 UTC │
	│ delete  │ -p disable-driver-mounts-234978                                                                                                                                                                                                                      │ disable-driver-mounts-234978 │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:55 UTC │
	│ start   │ -p embed-certs-386191 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-386191           │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │                     │
	│ image   │ old-k8s-version-992336 image list --format=json                                                                                                                                                                                                      │ old-k8s-version-992336       │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:55 UTC │
	│ pause   │ -p old-k8s-version-992336 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-992336       │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │                     │
	│ delete  │ -p old-k8s-version-992336                                                                                                                                                                                                                            │ old-k8s-version-992336       │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:56 UTC │
	│ delete  │ -p old-k8s-version-992336                                                                                                                                                                                                                            │ old-k8s-version-992336       │ jenkins │ v1.37.0 │ 02 Dec 25 20:56 UTC │ 02 Dec 25 20:56 UTC │
	│ image   │ no-preload-336331 image list --format=json                                                                                                                                                                                                           │ no-preload-336331            │ jenkins │ v1.37.0 │ 02 Dec 25 20:56 UTC │ 02 Dec 25 20:56 UTC │
	│ pause   │ -p no-preload-336331 --alsologtostderr -v=1                                                                                                                                                                                                          │ no-preload-336331            │ jenkins │ v1.37.0 │ 02 Dec 25 20:56 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴───────
──────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 20:55:49
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 20:55:49.973376  761851 out.go:360] Setting OutFile to fd 1 ...
	I1202 20:55:49.973479  761851 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:55:49.973486  761851 out.go:374] Setting ErrFile to fd 2...
	I1202 20:55:49.973492  761851 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:55:49.973784  761851 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-407427/.minikube/bin
	I1202 20:55:49.974402  761851 out.go:368] Setting JSON to false
	I1202 20:55:49.976053  761851 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":9494,"bootTime":1764699456,"procs":379,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 20:55:49.976153  761851 start.go:143] virtualization: kvm guest
	I1202 20:55:49.979903  761851 out.go:179] * [embed-certs-386191] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1202 20:55:49.981563  761851 out.go:179]   - MINIKUBE_LOCATION=21997
	I1202 20:55:49.981711  761851 notify.go:221] Checking for updates...
	I1202 20:55:49.985961  761851 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 20:55:49.989444  761851 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-407427/kubeconfig
	I1202 20:55:49.990856  761851 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-407427/.minikube
	I1202 20:55:49.992198  761851 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1202 20:55:49.994165  761851 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 20:55:49.996734  761851 config.go:182] Loaded profile config "default-k8s-diff-port-997805": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 20:55:49.996944  761851 config.go:182] Loaded profile config "no-preload-336331": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 20:55:49.997173  761851 config.go:182] Loaded profile config "old-k8s-version-992336": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1202 20:55:49.997373  761851 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 20:55:50.033364  761851 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1202 20:55:50.033467  761851 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 20:55:50.114622  761851 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-02 20:55:50.101227741 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 20:55:50.114779  761851 docker.go:319] overlay module found
	I1202 20:55:50.117537  761851 out.go:179] * Using the docker driver based on user configuration
	I1202 20:55:50.119145  761851 start.go:309] selected driver: docker
	I1202 20:55:50.119167  761851 start.go:927] validating driver "docker" against <nil>
	I1202 20:55:50.119183  761851 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 20:55:50.120035  761851 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 20:55:50.211212  761851 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-02 20:55:50.198488456 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 20:55:50.211445  761851 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1202 20:55:50.211790  761851 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 20:55:50.214433  761851 out.go:179] * Using Docker driver with root privileges
	I1202 20:55:50.218243  761851 cni.go:84] Creating CNI manager for ""
	I1202 20:55:50.218353  761851 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 20:55:50.218375  761851 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1202 20:55:50.218508  761851 start.go:353] cluster config:
	{Name:embed-certs-386191 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-386191 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 20:55:50.220045  761851 out.go:179] * Starting "embed-certs-386191" primary control-plane node in "embed-certs-386191" cluster
	I1202 20:55:50.221707  761851 cache.go:134] Beginning downloading kic base image for docker with crio
	I1202 20:55:50.223105  761851 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1202 20:55:50.224334  761851 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 20:55:50.224383  761851 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 20:55:50.224379  761851 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21997-407427/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1202 20:55:50.224423  761851 cache.go:65] Caching tarball of preloaded images
	I1202 20:55:50.224531  761851 preload.go:238] Found /home/jenkins/minikube-integration/21997-407427/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1202 20:55:50.224544  761851 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1202 20:55:50.224682  761851 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/config.json ...
	I1202 20:55:50.224706  761851 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/config.json: {Name:mk4df57c1427e88de36c6d265cf4b7b9447ba4a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:55:50.254982  761851 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1202 20:55:50.255008  761851 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1202 20:55:50.255030  761851 cache.go:243] Successfully downloaded all kic artifacts
	I1202 20:55:50.255092  761851 start.go:360] acquireMachinesLock for embed-certs-386191: {Name:mk07b451c8d7193712ed79603183bf03b141f2ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 20:55:50.255209  761851 start.go:364] duration metric: took 90.207µs to acquireMachinesLock for "embed-certs-386191"
	I1202 20:55:50.255244  761851 start.go:93] Provisioning new machine with config: &{Name:embed-certs-386191 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-386191 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 20:55:50.255372  761851 start.go:125] createHost starting for "" (driver="docker")
	W1202 20:55:47.478474  754876 pod_ready.go:104] pod "coredns-7d764666f9-ghxk6" is not "Ready", error: <nil>
	W1202 20:55:49.480219  754876 pod_ready.go:104] pod "coredns-7d764666f9-ghxk6" is not "Ready", error: <nil>
	I1202 20:55:48.658867  759377 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 20:55:48.658893  759377 machine.go:97] duration metric: took 4.363922202s to provisionDockerMachine
	I1202 20:55:48.658908  759377 start.go:293] postStartSetup for "default-k8s-diff-port-997805" (driver="docker")
	I1202 20:55:48.659934  759377 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 20:55:48.660266  759377 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 20:55:48.660319  759377 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-997805
	I1202 20:55:48.684270  759377 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/default-k8s-diff-port-997805/id_rsa Username:docker}
	I1202 20:55:48.800470  759377 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 20:55:48.806594  759377 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 20:55:48.806641  759377 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 20:55:48.806659  759377 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-407427/.minikube/addons for local assets ...
	I1202 20:55:48.806723  759377 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-407427/.minikube/files for local assets ...
	I1202 20:55:48.806832  759377 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-407427/.minikube/files/etc/ssl/certs/4110322.pem -> 4110322.pem in /etc/ssl/certs
	I1202 20:55:48.807095  759377 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 20:55:48.817526  759377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/files/etc/ssl/certs/4110322.pem --> /etc/ssl/certs/4110322.pem (1708 bytes)
	I1202 20:55:48.843728  759377 start.go:296] duration metric: took 183.799228ms for postStartSetup
	I1202 20:55:48.843844  759377 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 20:55:48.843886  759377 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-997805
	I1202 20:55:48.867562  759377 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/default-k8s-diff-port-997805/id_rsa Username:docker}
	I1202 20:55:48.976679  759377 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 20:55:48.983737  759377 fix.go:56] duration metric: took 5.130755935s for fixHost
	I1202 20:55:48.983779  759377 start.go:83] releasing machines lock for "default-k8s-diff-port-997805", held for 5.130814844s
	I1202 20:55:48.983853  759377 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-997805
	I1202 20:55:49.008951  759377 ssh_runner.go:195] Run: cat /version.json
	I1202 20:55:49.009046  759377 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-997805
	I1202 20:55:49.009048  759377 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 20:55:49.009136  759377 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-997805
	I1202 20:55:49.034693  759377 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/default-k8s-diff-port-997805/id_rsa Username:docker}
	I1202 20:55:49.035313  759377 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/default-k8s-diff-port-997805/id_rsa Username:docker}
	I1202 20:55:49.217584  759377 ssh_runner.go:195] Run: systemctl --version
	I1202 20:55:49.226948  759377 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 20:55:49.280525  759377 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 20:55:49.287579  759377 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 20:55:49.287663  759377 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 20:55:49.299593  759377 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1202 20:55:49.299624  759377 start.go:496] detecting cgroup driver to use...
	I1202 20:55:49.299667  759377 detect.go:190] detected "systemd" cgroup driver on host os
	I1202 20:55:49.299717  759377 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 20:55:49.321346  759377 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 20:55:49.340202  759377 docker.go:218] disabling cri-docker service (if available) ...
	I1202 20:55:49.340276  759377 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 20:55:49.364580  759377 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 20:55:49.384570  759377 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 20:55:49.507838  759377 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 20:55:49.636982  759377 docker.go:234] disabling docker service ...
	I1202 20:55:49.637124  759377 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 20:55:49.660429  759377 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 20:55:49.676580  759377 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 20:55:49.805919  759377 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 20:55:49.932552  759377 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 20:55:49.950808  759377 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 20:55:49.973269  759377 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1202 20:55:49.973378  759377 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:55:49.987382  759377 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1202 20:55:49.987446  759377 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:55:50.001518  759377 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:55:50.015622  759377 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:55:50.029383  759377 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 20:55:50.042396  759377 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:55:50.055622  759377 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:55:50.069706  759377 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:55:50.082027  759377 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 20:55:50.093878  759377 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 20:55:50.106172  759377 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 20:55:50.241651  759377 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1202 20:55:51.093615  759377 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 20:55:51.093712  759377 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 20:55:51.098803  759377 start.go:564] Will wait 60s for crictl version
	I1202 20:55:51.098893  759377 ssh_runner.go:195] Run: which crictl
	I1202 20:55:51.103616  759377 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 20:55:51.134275  759377 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1202 20:55:51.134365  759377 ssh_runner.go:195] Run: crio --version
	I1202 20:55:51.176508  759377 ssh_runner.go:195] Run: crio --version
	I1202 20:55:51.212619  759377 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.2 ...
	I1202 20:55:51.213954  759377 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-997805 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 20:55:51.239456  759377 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1202 20:55:51.247008  759377 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 20:55:51.258836  759377 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-997805 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-997805 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 20:55:51.259035  759377 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 20:55:51.259113  759377 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 20:55:51.305184  759377 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 20:55:51.305211  759377 crio.go:433] Images already preloaded, skipping extraction
	I1202 20:55:51.305279  759377 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 20:55:51.336679  759377 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 20:55:51.336721  759377 cache_images.go:86] Images are preloaded, skipping loading
	I1202 20:55:51.336736  759377 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.2 crio true true} ...
	I1202 20:55:51.336850  759377 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-997805 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-997805 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1202 20:55:51.336915  759377 ssh_runner.go:195] Run: crio config
	I1202 20:55:51.395485  759377 cni.go:84] Creating CNI manager for ""
	I1202 20:55:51.395526  759377 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 20:55:51.395553  759377 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1202 20:55:51.395590  759377 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-997805 NodeName:default-k8s-diff-port-997805 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 20:55:51.395786  759377 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-997805"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1202 20:55:51.395870  759377 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1202 20:55:51.406735  759377 binaries.go:51] Found k8s binaries, skipping transfer
	I1202 20:55:51.406822  759377 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 20:55:51.416228  759377 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1202 20:55:51.430748  759377 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1202 20:55:51.448244  759377 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I1202 20:55:51.463482  759377 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1202 20:55:51.467906  759377 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 20:55:51.480393  759377 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 20:55:51.588830  759377 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 20:55:51.618253  759377 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/default-k8s-diff-port-997805 for IP: 192.168.85.2
	I1202 20:55:51.618282  759377 certs.go:195] generating shared ca certs ...
	I1202 20:55:51.618303  759377 certs.go:227] acquiring lock for ca certs: {Name:mkea11924193dd4dc459e16d70207bde383196df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:55:51.618470  759377 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-407427/.minikube/ca.key
	I1202 20:55:51.618534  759377 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-407427/.minikube/proxy-client-ca.key
	I1202 20:55:51.618547  759377 certs.go:257] generating profile certs ...
	I1202 20:55:51.618661  759377 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/default-k8s-diff-port-997805/client.key
	I1202 20:55:51.618759  759377 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/default-k8s-diff-port-997805/apiserver.key.36ffc693
	I1202 20:55:51.618817  759377 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/default-k8s-diff-port-997805/proxy-client.key
	I1202 20:55:51.618958  759377 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/411032.pem (1338 bytes)
	W1202 20:55:51.619000  759377 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-407427/.minikube/certs/411032_empty.pem, impossibly tiny 0 bytes
	I1202 20:55:51.619010  759377 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca-key.pem (1679 bytes)
	I1202 20:55:51.619043  759377 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca.pem (1082 bytes)
	I1202 20:55:51.619087  759377 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/cert.pem (1123 bytes)
	I1202 20:55:51.619120  759377 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/key.pem (1675 bytes)
	I1202 20:55:51.619173  759377 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/files/etc/ssl/certs/4110322.pem (1708 bytes)
	I1202 20:55:51.619958  759377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 20:55:51.642775  759377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1202 20:55:51.668086  759377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 20:55:51.695111  759377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1202 20:55:51.723055  759377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/default-k8s-diff-port-997805/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1202 20:55:51.757108  759377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/default-k8s-diff-port-997805/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1202 20:55:51.782582  759377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/default-k8s-diff-port-997805/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 20:55:51.803028  759377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/default-k8s-diff-port-997805/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1202 20:55:51.823897  759377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 20:55:51.845621  759377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/certs/411032.pem --> /usr/share/ca-certificates/411032.pem (1338 bytes)
	I1202 20:55:51.866855  759377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/files/etc/ssl/certs/4110322.pem --> /usr/share/ca-certificates/4110322.pem (1708 bytes)
	I1202 20:55:51.890515  759377 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 20:55:51.906355  759377 ssh_runner.go:195] Run: openssl version
	I1202 20:55:51.914259  759377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4110322.pem && ln -fs /usr/share/ca-certificates/4110322.pem /etc/ssl/certs/4110322.pem"
	I1202 20:55:51.925148  759377 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4110322.pem
	I1202 20:55:51.929800  759377 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 20:13 /usr/share/ca-certificates/4110322.pem
	I1202 20:55:51.929869  759377 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4110322.pem
	I1202 20:55:51.972279  759377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4110322.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 20:55:51.983418  759377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 20:55:51.993784  759377 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 20:55:51.999249  759377 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 19:55 /usr/share/ca-certificates/minikubeCA.pem
	I1202 20:55:51.999316  759377 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 20:55:52.049373  759377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 20:55:52.061515  759377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/411032.pem && ln -fs /usr/share/ca-certificates/411032.pem /etc/ssl/certs/411032.pem"
	I1202 20:55:52.072126  759377 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/411032.pem
	I1202 20:55:52.076862  759377 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 20:13 /usr/share/ca-certificates/411032.pem
	I1202 20:55:52.076956  759377 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/411032.pem
	I1202 20:55:52.126642  759377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/411032.pem /etc/ssl/certs/51391683.0"
	I1202 20:55:52.138458  759377 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 20:55:52.143543  759377 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1202 20:55:52.198225  759377 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1202 20:55:52.254754  759377 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1202 20:55:52.319722  759377 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1202 20:55:52.380903  759377 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1202 20:55:52.422910  759377 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1202 20:55:52.483325  759377 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-997805 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-997805 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 20:55:52.483438  759377 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 20:55:52.483499  759377 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 20:55:52.522620  759377 cri.go:89] found id: "25e14e8feafb6c0d6c5261cd5e507b812e39fcb9c7e196408fe69d780ebbcd1d"
	I1202 20:55:52.522651  759377 cri.go:89] found id: "0c7e2844e2dbdbf5b9ffe8bf4e8d07304b64b059e3d4c965c2010c5d8a39c499"
	I1202 20:55:52.522657  759377 cri.go:89] found id: "81b0ec87511a05a7501d98eb27c52f69372a4b30c4ea523db262c140f9b68cd3"
	I1202 20:55:52.522662  759377 cri.go:89] found id: "e13e6c4d6c5da602ac2e1402a7612205c5a0ceffdccf7618da3035e562a7d9d3"
	I1202 20:55:52.522667  759377 cri.go:89] found id: ""
	I1202 20:55:52.522718  759377 ssh_runner.go:195] Run: sudo runc list -f json
	W1202 20:55:52.539274  759377 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T20:55:52Z" level=error msg="open /run/runc: no such file or directory"
	I1202 20:55:52.539358  759377 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 20:55:52.550759  759377 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1202 20:55:52.550911  759377 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1202 20:55:52.550977  759377 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1202 20:55:52.562444  759377 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1202 20:55:52.563380  759377 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-997805" does not appear in /home/jenkins/minikube-integration/21997-407427/kubeconfig
	I1202 20:55:52.563867  759377 kubeconfig.go:62] /home/jenkins/minikube-integration/21997-407427/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-997805" cluster setting kubeconfig missing "default-k8s-diff-port-997805" context setting]
	I1202 20:55:52.564708  759377 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-407427/kubeconfig: {Name:mk71769b01652992a8aaa239aae4f5c38b3500e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:55:52.567122  759377 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1202 20:55:52.580423  759377 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1202 20:55:52.580475  759377 kubeadm.go:602] duration metric: took 29.545337ms to restartPrimaryControlPlane
	I1202 20:55:52.580492  759377 kubeadm.go:403] duration metric: took 97.179033ms to StartCluster
	I1202 20:55:52.580515  759377 settings.go:142] acquiring lock: {Name:mk79858205e0c110b31d2645b49097e349ff37c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:55:52.580624  759377 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-407427/kubeconfig
	I1202 20:55:52.582395  759377 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-407427/kubeconfig: {Name:mk71769b01652992a8aaa239aae4f5c38b3500e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:55:52.582737  759377 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 20:55:52.582982  759377 config.go:182] Loaded profile config "default-k8s-diff-port-997805": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 20:55:52.583044  759377 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1202 20:55:52.583145  759377 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-997805"
	I1202 20:55:52.583167  759377 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-997805"
	W1202 20:55:52.583180  759377 addons.go:248] addon storage-provisioner should already be in state true
	I1202 20:55:52.583208  759377 host.go:66] Checking if "default-k8s-diff-port-997805" exists ...
	I1202 20:55:52.583706  759377 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-997805 --format={{.State.Status}}
	I1202 20:55:52.583924  759377 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-997805"
	I1202 20:55:52.583949  759377 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-997805"
	W1202 20:55:52.583958  759377 addons.go:248] addon dashboard should already be in state true
	I1202 20:55:52.583987  759377 host.go:66] Checking if "default-k8s-diff-port-997805" exists ...
	I1202 20:55:52.584470  759377 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-997805 --format={{.State.Status}}
	I1202 20:55:52.584621  759377 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-997805"
	I1202 20:55:52.584638  759377 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-997805"
	I1202 20:55:52.584909  759377 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-997805 --format={{.State.Status}}
	I1202 20:55:52.590138  759377 out.go:179] * Verifying Kubernetes components...
	I1202 20:55:52.591985  759377 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 20:55:52.621520  759377 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-997805"
	W1202 20:55:52.621550  759377 addons.go:248] addon default-storageclass should already be in state true
	I1202 20:55:52.621581  759377 host.go:66] Checking if "default-k8s-diff-port-997805" exists ...
	I1202 20:55:52.621962  759377 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1202 20:55:52.621973  759377 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 20:55:52.622100  759377 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-997805 --format={{.State.Status}}
	I1202 20:55:52.623522  759377 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 20:55:52.623542  759377 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1202 20:55:52.623861  759377 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-997805
	I1202 20:55:52.629794  759377 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1202 20:55:52.631326  759377 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1202 20:55:52.631354  759377 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1202 20:55:52.631441  759377 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-997805
	I1202 20:55:52.650454  759377 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1202 20:55:52.650440  759377 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/default-k8s-diff-port-997805/id_rsa Username:docker}
	I1202 20:55:52.650477  759377 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1202 20:55:52.650539  759377 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-997805
	I1202 20:55:52.664697  759377 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/default-k8s-diff-port-997805/id_rsa Username:docker}
	I1202 20:55:52.687593  759377 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/default-k8s-diff-port-997805/id_rsa Username:docker}
	I1202 20:55:52.782783  759377 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 20:55:52.788136  759377 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 20:55:52.796186  759377 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1202 20:55:52.796227  759377 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1202 20:55:52.805245  759377 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-997805" to be "Ready" ...
	I1202 20:55:52.813493  759377 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1202 20:55:52.816061  759377 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1202 20:55:52.816120  759377 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1202 20:55:52.836609  759377 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1202 20:55:52.836641  759377 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1202 20:55:52.858664  759377 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1202 20:55:52.858695  759377 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1202 20:55:52.881817  759377 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1202 20:55:52.881850  759377 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1202 20:55:52.898249  759377 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1202 20:55:52.898282  759377 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1202 20:55:52.916317  759377 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1202 20:55:52.916341  759377 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1202 20:55:52.934311  759377 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1202 20:55:52.934421  759377 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1202 20:55:52.954130  759377 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1202 20:55:52.954156  759377 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1202 20:55:52.971994  759377 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
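
	For context: the single kubectl invocation above installs every dashboard manifest in one apply. A minimal Go sketch of the same pattern, not minikube's own code (the manifest paths and a kubectl binary on PATH are illustrative assumptions):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Build one "kubectl apply -f a.yaml -f b.yaml ..." command, mirroring the log line above.
		manifests := []string{
			"/etc/kubernetes/addons/dashboard-ns.yaml",
			"/etc/kubernetes/addons/dashboard-dp.yaml",
			"/etc/kubernetes/addons/dashboard-svc.yaml",
		}
		args := []string{"apply"}
		for _, m := range manifests {
			args = append(args, "-f", m)
		}
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		fmt.Printf("%s", out)
		if err != nil {
			fmt.Println("apply failed:", err)
		}
	}
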
	I1202 20:55:50.259730  761851 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1202 20:55:50.260957  761851 start.go:159] libmachine.API.Create for "embed-certs-386191" (driver="docker")
	I1202 20:55:50.261018  761851 client.go:173] LocalClient.Create starting
	I1202 20:55:50.261131  761851 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca.pem
	I1202 20:55:50.261175  761851 main.go:143] libmachine: Decoding PEM data...
	I1202 20:55:50.261199  761851 main.go:143] libmachine: Parsing certificate...
	I1202 20:55:50.261293  761851 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21997-407427/.minikube/certs/cert.pem
	I1202 20:55:50.261321  761851 main.go:143] libmachine: Decoding PEM data...
	I1202 20:55:50.261336  761851 main.go:143] libmachine: Parsing certificate...
	I1202 20:55:50.261828  761851 cli_runner.go:164] Run: docker network inspect embed-certs-386191 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1202 20:55:50.287353  761851 cli_runner.go:211] docker network inspect embed-certs-386191 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1202 20:55:50.287436  761851 network_create.go:284] running [docker network inspect embed-certs-386191] to gather additional debugging logs...
	I1202 20:55:50.287467  761851 cli_runner.go:164] Run: docker network inspect embed-certs-386191
	W1202 20:55:50.313420  761851 cli_runner.go:211] docker network inspect embed-certs-386191 returned with exit code 1
	I1202 20:55:50.313458  761851 network_create.go:287] error running [docker network inspect embed-certs-386191]: docker network inspect embed-certs-386191: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-386191 not found
	I1202 20:55:50.313493  761851 network_create.go:289] output of [docker network inspect embed-certs-386191]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-386191 not found
	
	** /stderr **
	I1202 20:55:50.313695  761851 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 20:55:50.339597  761851 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-acf081edf266 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:42:04:c0:60:47:62} reservation:<nil>}
	I1202 20:55:50.340759  761851 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-9623a21fb225 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:6a:fc:8b:40:15:1b} reservation:<nil>}
	I1202 20:55:50.341559  761851 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-f2b79e7e26a5 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:0e:c7:f4:38:1c:32} reservation:<nil>}
	I1202 20:55:50.342581  761851 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-be4fb772701b IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:22:87:5f:38:96:b7} reservation:<nil>}
	I1202 20:55:50.343861  761851 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-13fe483902b9 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:a2:a4:21:b2:62:5a} reservation:<nil>}
	I1202 20:55:50.344785  761851 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-65ab470fa0e2 IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:16:23:28:7c:c5:24} reservation:<nil>}
	I1202 20:55:50.346012  761851 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001fb66a0}
	I1202 20:55:50.346044  761851 network_create.go:124] attempt to create docker network embed-certs-386191 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1202 20:55:50.346142  761851 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-386191 embed-certs-386191
	I1202 20:55:50.449757  761851 network_create.go:108] docker network embed-certs-386191 192.168.103.0/24 created
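
	The scan above walks candidate 192.168.x.0/24 subnets and picks the first one that no existing docker bridge already claims. A minimal sketch of that selection, assuming the step of 9 between candidates that this particular log shows (49, 58, ..., 103); an illustration, not minikube's documented rule:

	package main

	import "fmt"

	// firstFreeSubnet returns the first candidate /24 not present in the taken set.
	func firstFreeSubnet(taken map[string]bool) string {
		for third := 49; third <= 247; third += 9 {
			cidr := fmt.Sprintf("192.168.%d.0/24", third)
			if !taken[cidr] {
				return cidr
			}
		}
		return ""
	}

	func main() {
		// Subnets already used by other profiles, as reported in the log above.
		taken := map[string]bool{
			"192.168.49.0/24": true, "192.168.58.0/24": true,
			"192.168.67.0/24": true, "192.168.76.0/24": true,
			"192.168.85.0/24": true, "192.168.94.0/24": true,
		}
		fmt.Println(firstFreeSubnet(taken)) // prints 192.168.103.0/24
	}
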
	I1202 20:55:50.449812  761851 kic.go:121] calculated static IP "192.168.103.2" for the "embed-certs-386191" container
	I1202 20:55:50.449912  761851 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1202 20:55:50.476319  761851 cli_runner.go:164] Run: docker volume create embed-certs-386191 --label name.minikube.sigs.k8s.io=embed-certs-386191 --label created_by.minikube.sigs.k8s.io=true
	I1202 20:55:50.544287  761851 oci.go:103] Successfully created a docker volume embed-certs-386191
	I1202 20:55:50.544384  761851 cli_runner.go:164] Run: docker run --rm --name embed-certs-386191-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-386191 --entrypoint /usr/bin/test -v embed-certs-386191:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib
	I1202 20:55:51.390297  761851 oci.go:107] Successfully prepared a docker volume embed-certs-386191
	I1202 20:55:51.390398  761851 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 20:55:51.390416  761851 kic.go:194] Starting extracting preloaded images to volume ...
	I1202 20:55:51.390490  761851 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21997-407427/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-386191:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -I lz4 -xf /preloaded.tar -C /extractDir
	W1202 20:55:51.979014  754876 pod_ready.go:104] pod "coredns-7d764666f9-ghxk6" is not "Ready", error: <nil>
	W1202 20:55:54.048006  754876 pod_ready.go:104] pod "coredns-7d764666f9-ghxk6" is not "Ready", error: <nil>
	I1202 20:55:54.222552  759377 node_ready.go:49] node "default-k8s-diff-port-997805" is "Ready"
	I1202 20:55:54.222597  759377 node_ready.go:38] duration metric: took 1.417304277s for node "default-k8s-diff-port-997805" to be "Ready" ...
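
	The wait above polls the node object until its Ready condition reports True. A minimal sketch of an equivalent check by shelling out to kubectl; the node name, retry count, and interval are illustrative, not minikube's implementation (which talks to the API directly):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// nodeReady asks kubectl for the node's Ready condition status.
	func nodeReady(name string) (bool, error) {
		out, err := exec.Command("kubectl", "get", "node", name,
			"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
		if err != nil {
			return false, err
		}
		return strings.TrimSpace(string(out)) == "True", nil
	}

	func main() {
		for i := 0; i < 72; i++ { // roughly 6 minutes at 5s per attempt
			if ok, err := nodeReady("default-k8s-diff-port-997805"); err == nil && ok {
				fmt.Println("node is Ready")
				return
			}
			time.Sleep(5 * time.Second)
		}
		fmt.Println("timed out waiting for node to be Ready")
	}
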
	I1202 20:55:54.222616  759377 api_server.go:52] waiting for apiserver process to appear ...
	I1202 20:55:54.222680  759377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:55:55.521273  759377 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.733090646s)
	I1202 20:55:55.521348  759377 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.707827699s)
	I1202 20:55:55.956240  759377 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.984189677s)
	I1202 20:55:55.956260  759377 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.733551247s)
	I1202 20:55:55.956296  759377 api_server.go:72] duration metric: took 3.373517458s to wait for apiserver process to appear ...
	I1202 20:55:55.956305  759377 api_server.go:88] waiting for apiserver healthz status ...
	I1202 20:55:55.956329  759377 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1202 20:55:55.957591  759377 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-997805 addons enable metrics-server
	
	I1202 20:55:55.960080  759377 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1202 20:55:55.961425  759377 addons.go:530] duration metric: took 3.378380909s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1202 20:55:55.963108  759377 api_server.go:279] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 20:55:55.963149  759377 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 20:55:56.456815  759377 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1202 20:55:56.464867  759377 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1202 20:55:56.466374  759377 api_server.go:141] control plane version: v1.34.2
	I1202 20:55:56.466405  759377 api_server.go:131] duration metric: took 510.092ms to wait for apiserver health ...
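
	The health wait above keeps hitting /healthz until the 500 (rbac/bootstrap-roles still pending) turns into a 200. A minimal sketch of that polling loop; the URL, timeout, and the InsecureSkipVerify transport are illustrative assumptions, not minikube's exact client setup:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitHealthz polls the apiserver healthz endpoint until it returns 200 or the deadline passes.
	func waitHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   2 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("healthz not ready after %s", timeout)
	}

	func main() {
		if err := waitHealthz("https://192.168.85.2:8444/healthz", time.Minute); err != nil {
			fmt.Println(err)
		}
	}
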
	I1202 20:55:56.466417  759377 system_pods.go:43] waiting for kube-system pods to appear ...
	I1202 20:55:56.470286  759377 system_pods.go:59] 8 kube-system pods found
	I1202 20:55:56.470321  759377 system_pods.go:61] "coredns-66bc5c9577-jrln7" [37de7399-6357-4f08-9240-fc9e0d884f47] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 20:55:56.470336  759377 system_pods.go:61] "etcd-default-k8s-diff-port-997805" [446321a2-6abf-495a-8198-da2e43d8c18d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1202 20:55:56.470354  759377 system_pods.go:61] "kindnet-rzqpn" [eabc6de0-0707-4a1b-ab5a-4f6e8255bcfb] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1202 20:55:56.470364  759377 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-997805" [04c7ea7b-9b4e-44ee-8148-107daccf6b7b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1202 20:55:56.470376  759377 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-997805" [65af3257-fa45-4d8c-bab3-46c0390ab8df] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1202 20:55:56.470395  759377 system_pods.go:61] "kube-proxy-s2jpn" [407f6b3c-8d8b-47b0-b994-c061eedc6420] Running
	I1202 20:55:56.470403  759377 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-997805" [803a952f-54ba-4326-b70f-907fc7db368e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1202 20:55:56.470411  759377 system_pods.go:61] "storage-provisioner" [08893b97-1192-4f4e-8636-8f2ba82c853d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 20:55:56.470419  759377 system_pods.go:74] duration metric: took 3.994668ms to wait for pod list to return data ...
	I1202 20:55:56.470434  759377 default_sa.go:34] waiting for default service account to be created ...
	I1202 20:55:56.472796  759377 default_sa.go:45] found service account: "default"
	I1202 20:55:56.472821  759377 default_sa.go:55] duration metric: took 2.376879ms for default service account to be created ...
	I1202 20:55:56.472832  759377 system_pods.go:116] waiting for k8s-apps to be running ...
	I1202 20:55:56.476530  759377 system_pods.go:86] 8 kube-system pods found
	I1202 20:55:56.476568  759377 system_pods.go:89] "coredns-66bc5c9577-jrln7" [37de7399-6357-4f08-9240-fc9e0d884f47] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 20:55:56.476586  759377 system_pods.go:89] "etcd-default-k8s-diff-port-997805" [446321a2-6abf-495a-8198-da2e43d8c18d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1202 20:55:56.476598  759377 system_pods.go:89] "kindnet-rzqpn" [eabc6de0-0707-4a1b-ab5a-4f6e8255bcfb] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1202 20:55:56.476611  759377 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-997805" [04c7ea7b-9b4e-44ee-8148-107daccf6b7b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1202 20:55:56.476622  759377 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-997805" [65af3257-fa45-4d8c-bab3-46c0390ab8df] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1202 20:55:56.476636  759377 system_pods.go:89] "kube-proxy-s2jpn" [407f6b3c-8d8b-47b0-b994-c061eedc6420] Running
	I1202 20:55:56.476644  759377 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-997805" [803a952f-54ba-4326-b70f-907fc7db368e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1202 20:55:56.476652  759377 system_pods.go:89] "storage-provisioner" [08893b97-1192-4f4e-8636-8f2ba82c853d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 20:55:56.476666  759377 system_pods.go:126] duration metric: took 3.826088ms to wait for k8s-apps to be running ...
	I1202 20:55:56.476679  759377 system_svc.go:44] waiting for kubelet service to be running ....
	I1202 20:55:56.476731  759377 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 20:55:56.496595  759377 system_svc.go:56] duration metric: took 19.904103ms WaitForService to wait for kubelet
	I1202 20:55:56.496628  759377 kubeadm.go:587] duration metric: took 3.913848958s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 20:55:56.496651  759377 node_conditions.go:102] verifying NodePressure condition ...
	I1202 20:55:56.501320  759377 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1202 20:55:56.501357  759377 node_conditions.go:123] node cpu capacity is 8
	I1202 20:55:56.501378  759377 node_conditions.go:105] duration metric: took 4.719966ms to run NodePressure ...
	I1202 20:55:56.501394  759377 start.go:242] waiting for startup goroutines ...
	I1202 20:55:56.501406  759377 start.go:247] waiting for cluster config update ...
	I1202 20:55:56.501422  759377 start.go:256] writing updated cluster config ...
	I1202 20:55:56.501764  759377 ssh_runner.go:195] Run: rm -f paused
	I1202 20:55:56.507506  759377 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1202 20:55:56.511978  759377 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-jrln7" in "kube-system" namespace to be "Ready" or be gone ...
	W1202 20:55:58.518638  759377 pod_ready.go:104] pod "coredns-66bc5c9577-jrln7" is not "Ready", error: <nil>
	I1202 20:55:55.882395  761851 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21997-407427/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-386191:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -I lz4 -xf /preloaded.tar -C /extractDir: (4.491855191s)
	I1202 20:55:55.882432  761851 kic.go:203] duration metric: took 4.49201135s to extract preloaded images to volume ...
	W1202 20:55:55.882649  761851 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1202 20:55:55.882730  761851 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1202 20:55:55.882796  761851 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1202 20:55:55.970786  761851 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-386191 --name embed-certs-386191 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-386191 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-386191 --network embed-certs-386191 --ip 192.168.103.2 --volume embed-certs-386191:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b
	I1202 20:55:56.322797  761851 cli_runner.go:164] Run: docker container inspect embed-certs-386191 --format={{.State.Running}}
	I1202 20:55:56.346318  761851 cli_runner.go:164] Run: docker container inspect embed-certs-386191 --format={{.State.Status}}
	I1202 20:55:56.369508  761851 cli_runner.go:164] Run: docker exec embed-certs-386191 stat /var/lib/dpkg/alternatives/iptables
	I1202 20:55:56.426161  761851 oci.go:144] the created container "embed-certs-386191" has a running status.
	I1202 20:55:56.426198  761851 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21997-407427/.minikube/machines/embed-certs-386191/id_rsa...
	I1202 20:55:56.605690  761851 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21997-407427/.minikube/machines/embed-certs-386191/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1202 20:55:56.639247  761851 cli_runner.go:164] Run: docker container inspect embed-certs-386191 --format={{.State.Status}}
	I1202 20:55:56.661049  761851 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1202 20:55:56.661086  761851 kic_runner.go:114] Args: [docker exec --privileged embed-certs-386191 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1202 20:55:56.743919  761851 cli_runner.go:164] Run: docker container inspect embed-certs-386191 --format={{.State.Status}}
	I1202 20:55:56.771200  761851 machine.go:94] provisionDockerMachine start ...
	I1202 20:55:56.771338  761851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-386191
	I1202 20:55:56.796209  761851 main.go:143] libmachine: Using SSH client type: native
	I1202 20:55:56.796568  761851 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33513 <nil> <nil>}
	I1202 20:55:56.796593  761851 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 20:55:56.950615  761851 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-386191
	
	I1202 20:55:56.950657  761851 ubuntu.go:182] provisioning hostname "embed-certs-386191"
	I1202 20:55:56.950733  761851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-386191
	I1202 20:55:56.973211  761851 main.go:143] libmachine: Using SSH client type: native
	I1202 20:55:56.973537  761851 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33513 <nil> <nil>}
	I1202 20:55:56.973561  761851 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-386191 && echo "embed-certs-386191" | sudo tee /etc/hostname
	I1202 20:55:57.141391  761851 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-386191
	
	I1202 20:55:57.141500  761851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-386191
	I1202 20:55:57.162911  761851 main.go:143] libmachine: Using SSH client type: native
	I1202 20:55:57.163198  761851 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33513 <nil> <nil>}
	I1202 20:55:57.163228  761851 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-386191' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-386191/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-386191' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 20:55:57.310513  761851 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1202 20:55:57.310553  761851 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-407427/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-407427/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-407427/.minikube}
	I1202 20:55:57.310589  761851 ubuntu.go:190] setting up certificates
	I1202 20:55:57.310609  761851 provision.go:84] configureAuth start
	I1202 20:55:57.310699  761851 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-386191
	I1202 20:55:57.331293  761851 provision.go:143] copyHostCerts
	I1202 20:55:57.331361  761851 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-407427/.minikube/cert.pem, removing ...
	I1202 20:55:57.331377  761851 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-407427/.minikube/cert.pem
	I1202 20:55:57.331457  761851 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-407427/.minikube/cert.pem (1123 bytes)
	I1202 20:55:57.331608  761851 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-407427/.minikube/key.pem, removing ...
	I1202 20:55:57.331619  761851 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-407427/.minikube/key.pem
	I1202 20:55:57.331661  761851 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-407427/.minikube/key.pem (1675 bytes)
	I1202 20:55:57.331806  761851 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-407427/.minikube/ca.pem, removing ...
	I1202 20:55:57.331820  761851 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-407427/.minikube/ca.pem
	I1202 20:55:57.331861  761851 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-407427/.minikube/ca.pem (1082 bytes)
	I1202 20:55:57.331969  761851 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-407427/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca-key.pem org=jenkins.embed-certs-386191 san=[127.0.0.1 192.168.103.2 embed-certs-386191 localhost minikube]
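
	The configureAuth step above issues a server certificate for the machine with the listed SANs, signed by the minikube CA. A minimal sketch of issuing such a certificate with Go's crypto/x509; it self-signs a throwaway CA instead of loading ca.pem/ca-key.pem, elides error handling, and the key size and validity are assumptions:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Throwaway CA for illustration only; errors elided for brevity.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(365 * 24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Server certificate carrying the SANs seen in the log line above.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "embed-certs-386191"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"embed-certs-386191", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.103.2")},
		}
		srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		// The private key would be PEM-encoded to server-key.pem the same way.
		_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	}
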
	I1202 20:55:57.478343  761851 provision.go:177] copyRemoteCerts
	I1202 20:55:57.478412  761851 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 20:55:57.478461  761851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-386191
	I1202 20:55:57.503684  761851 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33513 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/embed-certs-386191/id_rsa Username:docker}
	I1202 20:55:57.613653  761851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 20:55:57.638025  761851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1202 20:55:57.660295  761851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1202 20:55:57.684474  761851 provision.go:87] duration metric: took 373.842939ms to configureAuth
	I1202 20:55:57.684512  761851 ubuntu.go:206] setting minikube options for container-runtime
	I1202 20:55:57.684722  761851 config.go:182] Loaded profile config "embed-certs-386191": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 20:55:57.684859  761851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-386191
	I1202 20:55:57.705791  761851 main.go:143] libmachine: Using SSH client type: native
	I1202 20:55:57.706104  761851 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33513 <nil> <nil>}
	I1202 20:55:57.706127  761851 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 20:55:58.017837  761851 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 20:55:58.017867  761851 machine.go:97] duration metric: took 1.246644154s to provisionDockerMachine
	I1202 20:55:58.017881  761851 client.go:176] duration metric: took 7.756854866s to LocalClient.Create
	I1202 20:55:58.017904  761851 start.go:167] duration metric: took 7.756953433s to libmachine.API.Create "embed-certs-386191"
	I1202 20:55:58.017914  761851 start.go:293] postStartSetup for "embed-certs-386191" (driver="docker")
	I1202 20:55:58.017926  761851 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 20:55:58.017993  761851 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 20:55:58.018051  761851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-386191
	I1202 20:55:58.040966  761851 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33513 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/embed-certs-386191/id_rsa Username:docker}
	I1202 20:55:58.164646  761851 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 20:55:58.169173  761851 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 20:55:58.169218  761851 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 20:55:58.169234  761851 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-407427/.minikube/addons for local assets ...
	I1202 20:55:58.169292  761851 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-407427/.minikube/files for local assets ...
	I1202 20:55:58.169398  761851 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-407427/.minikube/files/etc/ssl/certs/4110322.pem -> 4110322.pem in /etc/ssl/certs
	I1202 20:55:58.169534  761851 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 20:55:58.178343  761851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/files/etc/ssl/certs/4110322.pem --> /etc/ssl/certs/4110322.pem (1708 bytes)
	I1202 20:55:58.201537  761851 start.go:296] duration metric: took 183.605841ms for postStartSetup
	I1202 20:55:58.201980  761851 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-386191
	I1202 20:55:58.222381  761851 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/config.json ...
	I1202 20:55:58.222725  761851 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 20:55:58.222779  761851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-386191
	I1202 20:55:58.246974  761851 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33513 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/embed-certs-386191/id_rsa Username:docker}
	I1202 20:55:58.349308  761851 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 20:55:58.354335  761851 start.go:128] duration metric: took 8.098942472s to createHost
	I1202 20:55:58.354367  761851 start.go:83] releasing machines lock for "embed-certs-386191", held for 8.099141281s
	I1202 20:55:58.354452  761851 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-386191
	I1202 20:55:58.375692  761851 ssh_runner.go:195] Run: cat /version.json
	I1202 20:55:58.375743  761851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-386191
	I1202 20:55:58.375778  761851 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 20:55:58.375875  761851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-386191
	I1202 20:55:58.399444  761851 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33513 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/embed-certs-386191/id_rsa Username:docker}
	I1202 20:55:58.401096  761851 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33513 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/embed-certs-386191/id_rsa Username:docker}
	I1202 20:55:58.567709  761851 ssh_runner.go:195] Run: systemctl --version
	I1202 20:55:58.576291  761851 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 20:55:58.616262  761851 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 20:55:58.621961  761851 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 20:55:58.622044  761851 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 20:55:58.651183  761851 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1202 20:55:58.651217  761851 start.go:496] detecting cgroup driver to use...
	I1202 20:55:58.651265  761851 detect.go:190] detected "systemd" cgroup driver on host os
	I1202 20:55:58.651331  761851 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 20:55:58.670441  761851 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 20:55:58.684478  761851 docker.go:218] disabling cri-docker service (if available) ...
	I1202 20:55:58.684542  761851 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 20:55:58.704480  761851 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 20:55:58.725624  761851 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 20:55:58.831744  761851 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 20:55:58.927526  761851 docker.go:234] disabling docker service ...
	I1202 20:55:58.927588  761851 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 20:55:58.947085  761851 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 20:55:58.961716  761851 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 20:55:59.059830  761851 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 20:55:59.155836  761851 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 20:55:59.170575  761851 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 20:55:59.187647  761851 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1202 20:55:59.187711  761851 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:55:59.199691  761851 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1202 20:55:59.199752  761851 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:55:59.210377  761851 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:55:59.221666  761851 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:55:59.233039  761851 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 20:55:59.242836  761851 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:55:59.252564  761851 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:55:59.268580  761851 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:55:59.279302  761851 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 20:55:59.288550  761851 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 20:55:59.297166  761851 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 20:55:59.384478  761851 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1202 20:55:59.534012  761851 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 20:55:59.534100  761851 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
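
	The "Will wait 60s for socket path" step above simply stats the CRI socket until it exists after the crio restart. A minimal sketch of that wait; the polling interval is an assumption, while the path and timeout mirror the log:

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForSocket polls for a filesystem path until it appears or the timeout expires.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			time.Sleep(250 * time.Millisecond)
		}
		return fmt.Errorf("socket %s did not appear within %s", path, timeout)
	}

	func main() {
		if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
			fmt.Println(err)
		}
	}
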
	I1202 20:55:59.538865  761851 start.go:564] Will wait 60s for crictl version
	I1202 20:55:59.538929  761851 ssh_runner.go:195] Run: which crictl
	I1202 20:55:59.542822  761851 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 20:55:59.570175  761851 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1202 20:55:59.570275  761851 ssh_runner.go:195] Run: crio --version
	I1202 20:55:59.600365  761851 ssh_runner.go:195] Run: crio --version
	I1202 20:55:59.632281  761851 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.2 ...
	I1202 20:55:59.633569  761851 cli_runner.go:164] Run: docker network inspect embed-certs-386191 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 20:55:59.653989  761851 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1202 20:55:59.659705  761851 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 20:55:59.673939  761851 kubeadm.go:884] updating cluster {Name:embed-certs-386191 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-386191 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 20:55:59.674148  761851 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 20:55:59.674231  761851 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 20:55:59.721572  761851 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 20:55:59.721623  761851 crio.go:433] Images already preloaded, skipping extraction
	I1202 20:55:59.721807  761851 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 20:55:59.763726  761851 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 20:55:59.763753  761851 cache_images.go:86] Images are preloaded, skipping loading
	I1202 20:55:59.763763  761851 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.2 crio true true} ...
	I1202 20:55:59.763877  761851 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-386191 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:embed-certs-386191 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1202 20:55:59.763974  761851 ssh_runner.go:195] Run: crio config
	I1202 20:55:59.830764  761851 cni.go:84] Creating CNI manager for ""
	I1202 20:55:59.830790  761851 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 20:55:59.830809  761851 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1202 20:55:59.830832  761851 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-386191 NodeName:embed-certs-386191 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 20:55:59.830950  761851 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-386191"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
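
The YAML dump above is the multi-document kubeadm configuration (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration) that minikube writes to /var/tmp/minikube/kubeadm.yaml.new and later copies to /var/tmp/minikube/kubeadm.yaml before invoking kubeadm init (see the scp and cp steps logged below). As a hedged aside, not a command captured in this run, a file like this can be sanity-checked on the node with the validate subcommand, assuming the bundled kubeadm v1.34.2 binary supports it:

	sudo /var/lib/minikube/binaries/v1.34.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml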
	
	I1202 20:55:59.831035  761851 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1202 20:55:59.841880  761851 binaries.go:51] Found k8s binaries, skipping transfer
	I1202 20:55:59.841954  761851 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 20:55:59.852027  761851 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (369 bytes)
	I1202 20:55:59.869099  761851 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1202 20:55:59.889821  761851 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2217 bytes)
	I1202 20:55:59.907811  761851 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1202 20:55:59.913347  761851 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 20:55:59.927373  761851 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	W1202 20:55:56.478639  754876 pod_ready.go:104] pod "coredns-7d764666f9-ghxk6" is not "Ready", error: <nil>
	W1202 20:55:58.978346  754876 pod_ready.go:104] pod "coredns-7d764666f9-ghxk6" is not "Ready", error: <nil>
	I1202 20:56:00.050556  761851 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 20:56:00.077300  761851 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191 for IP: 192.168.103.2
	I1202 20:56:00.077325  761851 certs.go:195] generating shared ca certs ...
	I1202 20:56:00.077348  761851 certs.go:227] acquiring lock for ca certs: {Name:mkea11924193dd4dc459e16d70207bde383196df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:56:00.077530  761851 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-407427/.minikube/ca.key
	I1202 20:56:00.077575  761851 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-407427/.minikube/proxy-client-ca.key
	I1202 20:56:00.077588  761851 certs.go:257] generating profile certs ...
	I1202 20:56:00.077664  761851 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/client.key
	I1202 20:56:00.077682  761851 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/client.crt with IP's: []
	I1202 20:56:00.252632  761851 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/client.crt ...
	I1202 20:56:00.252663  761851 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/client.crt: {Name:mk9d10e4646efb676095250174819771b143a8ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:56:00.252877  761851 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/client.key ...
	I1202 20:56:00.252896  761851 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/client.key: {Name:mk09798c33ea1ea9f8eb08ebf47349e244c0760e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:56:00.253023  761851 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/apiserver.key.1b423d29
	I1202 20:56:00.253048  761851 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/apiserver.crt.1b423d29 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1202 20:56:00.432017  761851 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/apiserver.crt.1b423d29 ...
	I1202 20:56:00.432052  761851 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/apiserver.crt.1b423d29: {Name:mk6d91134ec48be46c0e886b478e71e1794c3cdc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:56:00.432278  761851 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/apiserver.key.1b423d29 ...
	I1202 20:56:00.432302  761851 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/apiserver.key.1b423d29: {Name:mk97fa0403fe534a503bf999364704991b597622 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:56:00.432413  761851 certs.go:382] copying /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/apiserver.crt.1b423d29 -> /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/apiserver.crt
	I1202 20:56:00.432512  761851 certs.go:386] copying /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/apiserver.key.1b423d29 -> /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/apiserver.key
	I1202 20:56:00.432593  761851 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/proxy-client.key
	I1202 20:56:00.432619  761851 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/proxy-client.crt with IP's: []
	I1202 20:56:00.527766  761851 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/proxy-client.crt ...
	I1202 20:56:00.527802  761851 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/proxy-client.crt: {Name:mke9848302a1327d00a26fb35bc8d56284a1ca08 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:56:00.528029  761851 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/proxy-client.key ...
	I1202 20:56:00.528053  761851 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/proxy-client.key: {Name:mk5b412430aa6855d80ede6a2641ba2256c9a484 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:56:00.528324  761851 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/411032.pem (1338 bytes)
	W1202 20:56:00.528374  761851 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-407427/.minikube/certs/411032_empty.pem, impossibly tiny 0 bytes
	I1202 20:56:00.528390  761851 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca-key.pem (1679 bytes)
	I1202 20:56:00.528423  761851 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca.pem (1082 bytes)
	I1202 20:56:00.528455  761851 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/cert.pem (1123 bytes)
	I1202 20:56:00.528493  761851 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/key.pem (1675 bytes)
	I1202 20:56:00.528552  761851 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/files/etc/ssl/certs/4110322.pem (1708 bytes)
	I1202 20:56:00.529432  761851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 20:56:00.554691  761851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1202 20:56:00.580499  761851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 20:56:00.606002  761851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1202 20:56:00.630389  761851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1202 20:56:00.655553  761851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1202 20:56:00.679419  761851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 20:56:00.704325  761851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1202 20:56:00.729255  761851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/files/etc/ssl/certs/4110322.pem --> /usr/share/ca-certificates/4110322.pem (1708 bytes)
	I1202 20:56:00.757910  761851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 20:56:00.782959  761851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/certs/411032.pem --> /usr/share/ca-certificates/411032.pem (1338 bytes)
	I1202 20:56:00.808564  761851 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 20:56:00.828291  761851 ssh_runner.go:195] Run: openssl version
	I1202 20:56:00.836796  761851 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4110322.pem && ln -fs /usr/share/ca-certificates/4110322.pem /etc/ssl/certs/4110322.pem"
	I1202 20:56:00.848469  761851 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4110322.pem
	I1202 20:56:00.853715  761851 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 20:13 /usr/share/ca-certificates/4110322.pem
	I1202 20:56:00.853790  761851 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4110322.pem
	I1202 20:56:00.905576  761851 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4110322.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 20:56:00.918463  761851 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 20:56:00.930339  761851 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 20:56:00.935452  761851 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 19:55 /usr/share/ca-certificates/minikubeCA.pem
	I1202 20:56:00.935522  761851 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 20:56:00.990051  761851 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 20:56:01.002960  761851 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/411032.pem && ln -fs /usr/share/ca-certificates/411032.pem /etc/ssl/certs/411032.pem"
	I1202 20:56:01.013994  761851 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/411032.pem
	I1202 20:56:01.019737  761851 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 20:13 /usr/share/ca-certificates/411032.pem
	I1202 20:56:01.019798  761851 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/411032.pem
	I1202 20:56:01.062700  761851 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/411032.pem /etc/ssl/certs/51391683.0"
	I1202 20:56:01.074487  761851 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 20:56:01.079958  761851 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1202 20:56:01.080033  761851 kubeadm.go:401] StartCluster: {Name:embed-certs-386191 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-386191 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 20:56:01.080164  761851 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 20:56:01.080231  761851 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 20:56:01.119713  761851 cri.go:89] found id: ""
	I1202 20:56:01.122354  761851 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 20:56:01.160024  761851 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1202 20:56:01.174466  761851 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1202 20:56:01.174517  761851 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 20:56:01.186198  761851 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1202 20:56:01.186294  761851 kubeadm.go:158] found existing configuration files:
	
	I1202 20:56:01.186361  761851 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1202 20:56:01.201548  761851 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1202 20:56:01.201623  761851 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1202 20:56:01.214153  761851 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1202 20:56:01.225107  761851 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1202 20:56:01.225225  761851 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 20:56:01.236050  761851 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1202 20:56:01.247714  761851 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1202 20:56:01.247785  761851 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 20:56:01.259129  761851 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1202 20:56:01.270914  761851 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1202 20:56:01.270981  761851 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1202 20:56:01.283320  761851 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1202 20:56:01.344042  761851 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1202 20:56:01.344150  761851 kubeadm.go:319] [preflight] Running pre-flight checks
	I1202 20:56:01.374696  761851 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1202 20:56:01.374786  761851 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1202 20:56:01.374832  761851 kubeadm.go:319] OS: Linux
	I1202 20:56:01.374904  761851 kubeadm.go:319] CGROUPS_CPU: enabled
	I1202 20:56:01.374965  761851 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1202 20:56:01.375027  761851 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1202 20:56:01.375100  761851 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1202 20:56:01.375165  761851 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1202 20:56:01.375227  761851 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1202 20:56:01.375295  761851 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1202 20:56:01.375351  761851 kubeadm.go:319] CGROUPS_IO: enabled
	I1202 20:56:01.461671  761851 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1202 20:56:01.461847  761851 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1202 20:56:01.462101  761851 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1202 20:56:01.473475  761851 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1202 20:56:00.519234  759377 pod_ready.go:104] pod "coredns-66bc5c9577-jrln7" is not "Ready", error: <nil>
	W1202 20:56:03.019288  759377 pod_ready.go:104] pod "coredns-66bc5c9577-jrln7" is not "Ready", error: <nil>
	I1202 20:56:01.478718  761851 out.go:252]   - Generating certificates and keys ...
	I1202 20:56:01.478829  761851 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1202 20:56:01.478911  761851 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1202 20:56:01.668758  761851 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1202 20:56:01.829895  761851 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1202 20:56:02.005376  761851 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1202 20:56:02.862909  761851 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1202 20:56:03.307052  761851 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1202 20:56:03.307703  761851 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-386191 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1202 20:56:03.383959  761851 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1202 20:56:03.384496  761851 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-386191 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1202 20:56:03.508307  761851 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1202 20:56:04.670556  761851 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1202 20:56:04.823930  761851 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1202 20:56:04.824007  761851 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	W1202 20:56:00.979309  754876 pod_ready.go:104] pod "coredns-7d764666f9-ghxk6" is not "Ready", error: <nil>
	W1202 20:56:02.980313  754876 pod_ready.go:104] pod "coredns-7d764666f9-ghxk6" is not "Ready", error: <nil>
	W1202 20:56:05.478729  754876 pod_ready.go:104] pod "coredns-7d764666f9-ghxk6" is not "Ready", error: <nil>
	I1202 20:56:05.205466  761851 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1202 20:56:05.375427  761851 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1202 20:56:05.434193  761851 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1202 20:56:05.863197  761851 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1202 20:56:06.053990  761851 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1202 20:56:06.054504  761851 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1202 20:56:06.058651  761851 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1202 20:56:05.517785  759377 pod_ready.go:104] pod "coredns-66bc5c9577-jrln7" is not "Ready", error: <nil>
	W1202 20:56:07.518439  759377 pod_ready.go:104] pod "coredns-66bc5c9577-jrln7" is not "Ready", error: <nil>
	I1202 20:56:06.060126  761851 out.go:252]   - Booting up control plane ...
	I1202 20:56:06.060244  761851 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1202 20:56:06.060364  761851 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1202 20:56:06.061268  761851 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1202 20:56:06.095037  761851 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1202 20:56:06.095189  761851 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1202 20:56:06.102515  761851 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1202 20:56:06.102696  761851 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1202 20:56:06.102769  761851 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1202 20:56:06.205490  761851 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1202 20:56:06.205715  761851 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1202 20:56:07.205674  761851 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001810301s
	I1202 20:56:07.209848  761851 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1202 20:56:07.210052  761851 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1202 20:56:07.210217  761851 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1202 20:56:07.210338  761851 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1202 20:56:08.756010  761851 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.546069674s
	I1202 20:56:09.869674  761851 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.659323153s
	W1202 20:56:07.979740  754876 pod_ready.go:104] pod "coredns-7d764666f9-ghxk6" is not "Ready", error: <nil>
	W1202 20:56:10.478689  754876 pod_ready.go:104] pod "coredns-7d764666f9-ghxk6" is not "Ready", error: <nil>
	I1202 20:56:11.711917  761851 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.502061899s
	I1202 20:56:11.728157  761851 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1202 20:56:11.740906  761851 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1202 20:56:11.753231  761851 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1202 20:56:11.753530  761851 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-386191 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1202 20:56:11.764705  761851 kubeadm.go:319] [bootstrap-token] Using token: c8uju2.57r80hlp0isn29k2
	I1202 20:56:11.766183  761851 out.go:252]   - Configuring RBAC rules ...
	I1202 20:56:11.766339  761851 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1202 20:56:11.770506  761851 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1202 20:56:11.777525  761851 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1202 20:56:11.780772  761851 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1202 20:56:11.785459  761851 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1202 20:56:11.788963  761851 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1202 20:56:12.119080  761851 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1202 20:56:12.539952  761851 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1202 20:56:13.118875  761851 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1202 20:56:13.119856  761851 kubeadm.go:319] 
	I1202 20:56:13.119972  761851 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1202 20:56:13.119991  761851 kubeadm.go:319] 
	I1202 20:56:13.120096  761851 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1202 20:56:13.120106  761851 kubeadm.go:319] 
	I1202 20:56:13.120132  761851 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1202 20:56:13.120189  761851 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1202 20:56:13.120239  761851 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1202 20:56:13.120250  761851 kubeadm.go:319] 
	I1202 20:56:13.120296  761851 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1202 20:56:13.120303  761851 kubeadm.go:319] 
	I1202 20:56:13.120350  761851 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1202 20:56:13.120356  761851 kubeadm.go:319] 
	I1202 20:56:13.120405  761851 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1202 20:56:13.120480  761851 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1202 20:56:13.120550  761851 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1202 20:56:13.120559  761851 kubeadm.go:319] 
	I1202 20:56:13.120655  761851 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1202 20:56:13.120760  761851 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1202 20:56:13.120770  761851 kubeadm.go:319] 
	I1202 20:56:13.120947  761851 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token c8uju2.57r80hlp0isn29k2 \
	I1202 20:56:13.121116  761851 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:48349ffa2369b87cd15480ece56edb1c5c5d97a828930f9dfbf1d1ceccf80ff4 \
	I1202 20:56:13.121150  761851 kubeadm.go:319] 	--control-plane 
	I1202 20:56:13.121158  761851 kubeadm.go:319] 
	I1202 20:56:13.121277  761851 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1202 20:56:13.121292  761851 kubeadm.go:319] 
	I1202 20:56:13.121403  761851 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token c8uju2.57r80hlp0isn29k2 \
	I1202 20:56:13.121546  761851 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:48349ffa2369b87cd15480ece56edb1c5c5d97a828930f9dfbf1d1ceccf80ff4 
	I1202 20:56:13.124563  761851 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1202 20:56:13.124664  761851 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1202 20:56:13.124688  761851 cni.go:84] Creating CNI manager for ""
	I1202 20:56:13.124700  761851 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 20:56:13.126500  761851 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1202 20:56:10.017702  759377 pod_ready.go:104] pod "coredns-66bc5c9577-jrln7" is not "Ready", error: <nil>
	W1202 20:56:12.018270  759377 pod_ready.go:104] pod "coredns-66bc5c9577-jrln7" is not "Ready", error: <nil>
	I1202 20:56:13.128206  761851 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1202 20:56:13.133011  761851 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1202 20:56:13.133036  761851 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1202 20:56:13.147210  761851 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1202 20:56:13.367880  761851 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1202 20:56:13.368008  761851 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 20:56:13.368037  761851 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-386191 minikube.k8s.io/updated_at=2025_12_02T20_56_13_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=c0dec3f81fa1d67ed4ee425e0873d0aa009f9b92 minikube.k8s.io/name=embed-certs-386191 minikube.k8s.io/primary=true
	I1202 20:56:13.378170  761851 ops.go:34] apiserver oom_adj: -16
	I1202 20:56:13.456213  761851 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 20:56:13.956791  761851 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 20:56:14.456911  761851 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 20:56:14.957002  761851 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1202 20:56:12.481885  754876 pod_ready.go:104] pod "coredns-7d764666f9-ghxk6" is not "Ready", error: <nil>
	I1202 20:56:14.478647  754876 pod_ready.go:94] pod "coredns-7d764666f9-ghxk6" is "Ready"
	I1202 20:56:14.478679  754876 pod_ready.go:86] duration metric: took 33.50633852s for pod "coredns-7d764666f9-ghxk6" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:14.481510  754876 pod_ready.go:83] waiting for pod "etcd-no-preload-336331" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:14.487252  754876 pod_ready.go:94] pod "etcd-no-preload-336331" is "Ready"
	I1202 20:56:14.487284  754876 pod_ready.go:86] duration metric: took 5.742661ms for pod "etcd-no-preload-336331" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:14.489709  754876 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-336331" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:14.493975  754876 pod_ready.go:94] pod "kube-apiserver-no-preload-336331" is "Ready"
	I1202 20:56:14.494030  754876 pod_ready.go:86] duration metric: took 4.293005ms for pod "kube-apiserver-no-preload-336331" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:14.496555  754876 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-336331" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:14.676017  754876 pod_ready.go:94] pod "kube-controller-manager-no-preload-336331" is "Ready"
	I1202 20:56:14.676054  754876 pod_ready.go:86] duration metric: took 179.468852ms for pod "kube-controller-manager-no-preload-336331" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:14.876507  754876 pod_ready.go:83] waiting for pod "kube-proxy-qc2v9" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:15.276156  754876 pod_ready.go:94] pod "kube-proxy-qc2v9" is "Ready"
	I1202 20:56:15.276184  754876 pod_ready.go:86] duration metric: took 399.652639ms for pod "kube-proxy-qc2v9" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:15.476929  754876 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-336331" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:15.876785  754876 pod_ready.go:94] pod "kube-scheduler-no-preload-336331" is "Ready"
	I1202 20:56:15.876821  754876 pod_ready.go:86] duration metric: took 399.859554ms for pod "kube-scheduler-no-preload-336331" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:15.876837  754876 pod_ready.go:40] duration metric: took 34.909444308s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1202 20:56:15.923408  754876 start.go:625] kubectl: 1.34.2, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1202 20:56:15.925124  754876 out.go:179] * Done! kubectl is now configured to use "no-preload-336331" cluster and "default" namespace by default
	I1202 20:56:15.457186  761851 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 20:56:15.957341  761851 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 20:56:16.456356  761851 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 20:56:16.956786  761851 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 20:56:17.457273  761851 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 20:56:17.529683  761851 kubeadm.go:1114] duration metric: took 4.161789754s to wait for elevateKubeSystemPrivileges
	I1202 20:56:17.529733  761851 kubeadm.go:403] duration metric: took 16.449707403s to StartCluster
	I1202 20:56:17.529758  761851 settings.go:142] acquiring lock: {Name:mk79858205e0c110b31d2645b49097e349ff37c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:56:17.529828  761851 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-407427/kubeconfig
	I1202 20:56:17.531386  761851 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-407427/kubeconfig: {Name:mk71769b01652992a8aaa239aae4f5c38b3500e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:56:17.531613  761851 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1202 20:56:17.531617  761851 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 20:56:17.531699  761851 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1202 20:56:17.531801  761851 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-386191"
	I1202 20:56:17.531817  761851 config.go:182] Loaded profile config "embed-certs-386191": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 20:56:17.531839  761851 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-386191"
	I1202 20:56:17.531817  761851 addons.go:70] Setting default-storageclass=true in profile "embed-certs-386191"
	I1202 20:56:17.531877  761851 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-386191"
	I1202 20:56:17.531882  761851 host.go:66] Checking if "embed-certs-386191" exists ...
	I1202 20:56:17.532342  761851 cli_runner.go:164] Run: docker container inspect embed-certs-386191 --format={{.State.Status}}
	I1202 20:56:17.532507  761851 cli_runner.go:164] Run: docker container inspect embed-certs-386191 --format={{.State.Status}}
	I1202 20:56:17.534531  761851 out.go:179] * Verifying Kubernetes components...
	I1202 20:56:17.535950  761851 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 20:56:17.558800  761851 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 20:56:17.560025  761851 addons.go:239] Setting addon default-storageclass=true in "embed-certs-386191"
	I1202 20:56:17.560084  761851 host.go:66] Checking if "embed-certs-386191" exists ...
	I1202 20:56:17.560580  761851 cli_runner.go:164] Run: docker container inspect embed-certs-386191 --format={{.State.Status}}
	I1202 20:56:17.561225  761851 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 20:56:17.561246  761851 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1202 20:56:17.561324  761851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-386191
	I1202 20:56:17.590711  761851 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33513 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/embed-certs-386191/id_rsa Username:docker}
	I1202 20:56:17.592956  761851 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1202 20:56:17.592992  761851 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1202 20:56:17.593051  761851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-386191
	I1202 20:56:17.617931  761851 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33513 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/embed-certs-386191/id_rsa Username:docker}
	I1202 20:56:17.638614  761851 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1202 20:56:17.681673  761851 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 20:56:17.712144  761851 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 20:56:17.735866  761851 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1202 20:56:17.815035  761851 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1202 20:56:17.816483  761851 node_ready.go:35] waiting up to 6m0s for node "embed-certs-386191" to be "Ready" ...
	I1202 20:56:18.003767  761851 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1202 20:56:14.018515  759377 pod_ready.go:104] pod "coredns-66bc5c9577-jrln7" is not "Ready", error: <nil>
	W1202 20:56:16.020009  759377 pod_ready.go:104] pod "coredns-66bc5c9577-jrln7" is not "Ready", error: <nil>
	W1202 20:56:18.517905  759377 pod_ready.go:104] pod "coredns-66bc5c9577-jrln7" is not "Ready", error: <nil>
	I1202 20:56:18.004793  761851 addons.go:530] duration metric: took 473.08842ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1202 20:56:18.319554  761851 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-386191" context rescaled to 1 replicas
	W1202 20:56:19.820111  761851 node_ready.go:57] node "embed-certs-386191" has "Ready":"False" status (will retry)
	W1202 20:56:21.019501  759377 pod_ready.go:104] pod "coredns-66bc5c9577-jrln7" is not "Ready", error: <nil>
	W1202 20:56:23.518373  759377 pod_ready.go:104] pod "coredns-66bc5c9577-jrln7" is not "Ready", error: <nil>
	W1202 20:56:22.320036  761851 node_ready.go:57] node "embed-certs-386191" has "Ready":"False" status (will retry)
	W1202 20:56:24.320559  761851 node_ready.go:57] node "embed-certs-386191" has "Ready":"False" status (will retry)
	W1202 20:56:26.018767  759377 pod_ready.go:104] pod "coredns-66bc5c9577-jrln7" is not "Ready", error: <nil>
	W1202 20:56:28.019223  759377 pod_ready.go:104] pod "coredns-66bc5c9577-jrln7" is not "Ready", error: <nil>
	W1202 20:56:26.320730  761851 node_ready.go:57] node "embed-certs-386191" has "Ready":"False" status (will retry)
	W1202 20:56:28.820145  761851 node_ready.go:57] node "embed-certs-386191" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Dec 02 20:55:57 no-preload-336331 crio[567]: time="2025-12-02T20:55:57.719458696Z" level=info msg="Started container" PID=1739 containerID=65a98944e23b2051f5d2803b7cd4f48cd36fbd6fd8863e62252f9e57766b98ad description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nh2q4/dashboard-metrics-scraper id=92a1ecf7-01d5-4ec5-b271-a179b3c47a41 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3dab1966c7713a84ff487c510cbd7211f3f6958e62f55dd3b50773368e24fdea
	Dec 02 20:55:57 no-preload-336331 crio[567]: time="2025-12-02T20:55:57.766524766Z" level=info msg="Removing container: 2c5a874275c99b8ae5c4236310bf903c2bb613d66005d99b341d04c382954a4c" id=cf0116f9-11fe-4ee8-8815-2ca38ee9d31c name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 02 20:55:57 no-preload-336331 crio[567]: time="2025-12-02T20:55:57.777595312Z" level=info msg="Removed container 2c5a874275c99b8ae5c4236310bf903c2bb613d66005d99b341d04c382954a4c: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nh2q4/dashboard-metrics-scraper" id=cf0116f9-11fe-4ee8-8815-2ca38ee9d31c name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 02 20:56:10 no-preload-336331 crio[567]: time="2025-12-02T20:56:10.808732694Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=20bc2c48-0df6-4298-8d3a-54914f8033fe name=/runtime.v1.ImageService/ImageStatus
	Dec 02 20:56:10 no-preload-336331 crio[567]: time="2025-12-02T20:56:10.809749595Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=3af65e13-27b0-4ceb-b127-1e547de16b43 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 20:56:10 no-preload-336331 crio[567]: time="2025-12-02T20:56:10.810858267Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=f33d71fd-19a8-4c88-8df1-59f3d44ef85a name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 20:56:10 no-preload-336331 crio[567]: time="2025-12-02T20:56:10.811020282Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 20:56:10 no-preload-336331 crio[567]: time="2025-12-02T20:56:10.815140415Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 20:56:10 no-preload-336331 crio[567]: time="2025-12-02T20:56:10.815359705Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/9bf7b5f56bdafbf738d4a385934c914be8bbfd1f7e63a6311be0fc3c81950523/merged/etc/passwd: no such file or directory"
	Dec 02 20:56:10 no-preload-336331 crio[567]: time="2025-12-02T20:56:10.815396694Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/9bf7b5f56bdafbf738d4a385934c914be8bbfd1f7e63a6311be0fc3c81950523/merged/etc/group: no such file or directory"
	Dec 02 20:56:10 no-preload-336331 crio[567]: time="2025-12-02T20:56:10.815698732Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 20:56:10 no-preload-336331 crio[567]: time="2025-12-02T20:56:10.844850747Z" level=info msg="Created container 6a58ab9b9c0482bd5f103029c7c0f3bdb6d5c02e0fc49f59a43c1b17c375958e: kube-system/storage-provisioner/storage-provisioner" id=f33d71fd-19a8-4c88-8df1-59f3d44ef85a name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 20:56:10 no-preload-336331 crio[567]: time="2025-12-02T20:56:10.845604487Z" level=info msg="Starting container: 6a58ab9b9c0482bd5f103029c7c0f3bdb6d5c02e0fc49f59a43c1b17c375958e" id=9cb202c7-21dc-4ba1-addd-39ee1b88fc88 name=/runtime.v1.RuntimeService/StartContainer
	Dec 02 20:56:10 no-preload-336331 crio[567]: time="2025-12-02T20:56:10.847929485Z" level=info msg="Started container" PID=1757 containerID=6a58ab9b9c0482bd5f103029c7c0f3bdb6d5c02e0fc49f59a43c1b17c375958e description=kube-system/storage-provisioner/storage-provisioner id=9cb202c7-21dc-4ba1-addd-39ee1b88fc88 name=/runtime.v1.RuntimeService/StartContainer sandboxID=247c55173c29d3744d6e9d786b5583c5b587877b11e48d4fe03ea16eeb0d052e
	Dec 02 20:56:21 no-preload-336331 crio[567]: time="2025-12-02T20:56:21.671769143Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=ab3814fc-1d04-4252-9bd2-b9236e2d292e name=/runtime.v1.ImageService/ImageStatus
	Dec 02 20:56:21 no-preload-336331 crio[567]: time="2025-12-02T20:56:21.672877432Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=fadb8996-012d-4100-a023-4be609bd4340 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 20:56:21 no-preload-336331 crio[567]: time="2025-12-02T20:56:21.673871497Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nh2q4/dashboard-metrics-scraper" id=909aa2ad-dbf6-4695-a39d-210107f4165f name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 20:56:21 no-preload-336331 crio[567]: time="2025-12-02T20:56:21.674035377Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 20:56:21 no-preload-336331 crio[567]: time="2025-12-02T20:56:21.68161873Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 20:56:21 no-preload-336331 crio[567]: time="2025-12-02T20:56:21.682243573Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 20:56:21 no-preload-336331 crio[567]: time="2025-12-02T20:56:21.717256159Z" level=info msg="Created container a466626000385a894ca35d0cbbd705f0a7ea58df0bec6d3ee73e98444e45ee26: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nh2q4/dashboard-metrics-scraper" id=909aa2ad-dbf6-4695-a39d-210107f4165f name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 20:56:21 no-preload-336331 crio[567]: time="2025-12-02T20:56:21.718021039Z" level=info msg="Starting container: a466626000385a894ca35d0cbbd705f0a7ea58df0bec6d3ee73e98444e45ee26" id=c34ec2cb-ed34-433b-934b-4dbeba7b4a06 name=/runtime.v1.RuntimeService/StartContainer
	Dec 02 20:56:21 no-preload-336331 crio[567]: time="2025-12-02T20:56:21.719868279Z" level=info msg="Started container" PID=1792 containerID=a466626000385a894ca35d0cbbd705f0a7ea58df0bec6d3ee73e98444e45ee26 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nh2q4/dashboard-metrics-scraper id=c34ec2cb-ed34-433b-934b-4dbeba7b4a06 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3dab1966c7713a84ff487c510cbd7211f3f6958e62f55dd3b50773368e24fdea
	Dec 02 20:56:21 no-preload-336331 crio[567]: time="2025-12-02T20:56:21.841901993Z" level=info msg="Removing container: 65a98944e23b2051f5d2803b7cd4f48cd36fbd6fd8863e62252f9e57766b98ad" id=54aa0970-a3b9-4d90-bebf-2ebb18b43307 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 02 20:56:21 no-preload-336331 crio[567]: time="2025-12-02T20:56:21.852500794Z" level=info msg="Removed container 65a98944e23b2051f5d2803b7cd4f48cd36fbd6fd8863e62252f9e57766b98ad: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nh2q4/dashboard-metrics-scraper" id=54aa0970-a3b9-4d90-bebf-2ebb18b43307 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	a466626000385       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           11 seconds ago      Exited              dashboard-metrics-scraper   3                   3dab1966c7713       dashboard-metrics-scraper-867fb5f87b-nh2q4   kubernetes-dashboard
	6a58ab9b9c048       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           22 seconds ago      Running             storage-provisioner         1                   247c55173c29d       storage-provisioner                          kube-system
	392425906cb89       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   42 seconds ago      Running             kubernetes-dashboard        0                   8b3d9c6e13687       kubernetes-dashboard-b84665fb8-njbfb         kubernetes-dashboard
	88167c0c54572       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                           53 seconds ago      Running             coredns                     0                   f2acf59363960       coredns-7d764666f9-ghxk6                     kube-system
	14998959c7ac1       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           53 seconds ago      Running             busybox                     1                   bd8482640477a       busybox                                      default
	a8e5580b374ec       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           53 seconds ago      Running             kindnet-cni                 0                   698949487dc26       kindnet-5blk7                                kube-system
	80e9078d18ca5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           53 seconds ago      Exited              storage-provisioner         0                   247c55173c29d       storage-provisioner                          kube-system
	4ba2da46b3cf6       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810                                           53 seconds ago      Running             kube-proxy                  0                   e141c1deb2aa7       kube-proxy-qc2v9                             kube-system
	fe483c8206ed4       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           55 seconds ago      Running             etcd                        0                   9d5f6bc769821       etcd-no-preload-336331                       kube-system
	8a39789ad0781       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc                                           55 seconds ago      Running             kube-controller-manager     0                   1a180f6e9da0d       kube-controller-manager-no-preload-336331    kube-system
	cec9f1979d354       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46                                           55 seconds ago      Running             kube-scheduler              0                   d70011c809154       kube-scheduler-no-preload-336331             kube-system
	9d960cc48cf5c       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b                                           55 seconds ago      Running             kube-apiserver              0                   141b2cc839d51       kube-apiserver-no-preload-336331             kube-system
	
	
	==> coredns [88167c0c5457270abf23d0e9c8ba2c26bd39e3fd35acbcc5be0ec9337db24e9e] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:55185 - 37920 "HINFO IN 4806991979558089534.5762877512650251915. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.027381649s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	
	
	==> describe nodes <==
	Name:               no-preload-336331
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-336331
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c0dec3f81fa1d67ed4ee425e0873d0aa009f9b92
	                    minikube.k8s.io/name=no-preload-336331
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_02T20_54_39_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 02 Dec 2025 20:54:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-336331
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 02 Dec 2025 20:56:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 02 Dec 2025 20:56:10 +0000   Tue, 02 Dec 2025 20:54:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 02 Dec 2025 20:56:10 +0000   Tue, 02 Dec 2025 20:54:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 02 Dec 2025 20:56:10 +0000   Tue, 02 Dec 2025 20:54:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 02 Dec 2025 20:56:10 +0000   Tue, 02 Dec 2025 20:54:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-336331
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 c31a325af81b969158c21fa769271857
	  System UUID:                3a1272e4-255b-4719-83a7-b5faa7d71457
	  Boot ID:                    9dd5456f-e394-4b4b-9458-48d26faf507e
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 coredns-7d764666f9-ghxk6                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     109s
	  kube-system                 etcd-no-preload-336331                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         115s
	  kube-system                 kindnet-5blk7                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      109s
	  kube-system                 kube-apiserver-no-preload-336331              250m (3%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-controller-manager-no-preload-336331     200m (2%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-proxy-qc2v9                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-scheduler-no-preload-336331              100m (1%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-nh2q4    0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-njbfb          0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  110s  node-controller  Node no-preload-336331 event: Registered Node no-preload-336331 in Controller
	  Normal  RegisteredNode  51s   node-controller  Node no-preload-336331 event: Registered Node no-preload-336331 in Controller
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: a6 a8 6c 2c e4 fb 66 95 0d 9f b9 e6 08 00
	[ +32.254238] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a6 a8 6c 2c e4 fb 66 95 0d 9f b9 e6 08 00
	[Dec 2 20:52] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 62 03 bd 14 45 8a 08 06
	[  +0.000590] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 96 27 ad 0d 40 04 08 06
	[Dec 2 20:53] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 56 d8 4f 6f 7c f7 08 06
	[  +0.000700] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 82 e4 ba c0 78 5f 08 06
	[ +10.119645] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000022] ll header: 00000000: ff ff ff ff ff ff ea 15 d8 88 c9 57 08 06
	[  +2.447166] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 62 df 09 53 d6 6e 08 06
	[  +0.000374] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 62 8d 06 71 0a 5e 08 06
	[Dec 2 20:54] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 12 47 13 50 f6 bc 08 06
	[  +0.001523] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ea 15 d8 88 c9 57 08 06
	[ +22.123549] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 36 0d 45 06 42 2a 08 06
	[  +0.000389] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 56 d8 4f 6f 7c f7 08 06
	
	
	==> etcd [fe483c8206ed4feb9f82c31650dd1c179edfd56fdbd85b46b0866b331f6ea99d] <==
	{"level":"warn","ts":"2025-12-02T20:55:38.923390Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40612","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:55:38.937708Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40634","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:55:38.946997Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:55:38.956714Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:55:38.964668Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:55:38.975032Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40714","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:55:38.983518Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40738","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:55:38.992147Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40758","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:55:39.001116Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:55:39.010267Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:55:39.019092Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40800","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:55:39.026508Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40818","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:55:39.034719Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40842","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:55:39.043158Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:55:39.051131Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:55:39.059607Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:55:39.067421Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:55:39.074466Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40904","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:55:39.081587Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40922","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:55:39.088808Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:55:39.103014Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40944","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:55:39.109590Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40956","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:55:39.116342Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40968","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:55:39.175169Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41000","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-02T20:55:45.965454Z","caller":"traceutil/trace.go:172","msg":"trace[1181498233] transaction","detail":"{read_only:false; response_revision:549; number_of_response:1; }","duration":"106.052706ms","start":"2025-12-02T20:55:45.859374Z","end":"2025-12-02T20:55:45.965427Z","steps":["trace[1181498233] 'process raft request'  (duration: 71.783873ms)","trace[1181498233] 'compare'  (duration: 34.139386ms)"],"step_count":2}
	
	
	==> kernel <==
	 20:56:33 up  2:38,  0 user,  load average: 4.22, 4.10, 2.72
	Linux no-preload-336331 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [a8e5580b374ec483ae25a01dd060fd7f0ac21c7f4a3afc6999fc62d8ede79880] <==
	I1202 20:55:40.373795       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1202 20:55:40.374118       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1202 20:55:40.374314       1 main.go:148] setting mtu 1500 for CNI 
	I1202 20:55:40.374334       1 main.go:178] kindnetd IP family: "ipv4"
	I1202 20:55:40.374360       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-02T20:55:40Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1202 20:55:40.644139       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1202 20:55:40.644781       1 controller.go:381] "Waiting for informer caches to sync"
	I1202 20:55:40.644825       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1202 20:55:40.645080       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1202 20:55:41.071023       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1202 20:55:41.071060       1 metrics.go:72] Registering metrics
	I1202 20:55:41.071138       1 controller.go:711] "Syncing nftables rules"
	I1202 20:55:50.644744       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1202 20:55:50.644860       1 main.go:301] handling current node
	I1202 20:56:00.645162       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1202 20:56:00.645210       1 main.go:301] handling current node
	I1202 20:56:10.644395       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1202 20:56:10.644434       1 main.go:301] handling current node
	I1202 20:56:20.644132       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1202 20:56:20.644189       1 main.go:301] handling current node
	I1202 20:56:30.651139       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1202 20:56:30.651178       1 main.go:301] handling current node
	
	
	==> kube-apiserver [9d960cc48cf5c1a7210c34cfa4e205107d9dd729104ed2798e71e12ba001d7ec] <==
	I1202 20:55:39.683936       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1202 20:55:39.683947       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1202 20:55:39.683922       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1202 20:55:39.686139       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1202 20:55:39.683975       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1202 20:55:39.684611       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1202 20:55:39.685950       1 aggregator.go:187] initial CRD sync complete...
	I1202 20:55:39.686483       1 autoregister_controller.go:144] Starting autoregister controller
	I1202 20:55:39.686501       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1202 20:55:39.686519       1 cache.go:39] Caches are synced for autoregister controller
	I1202 20:55:39.701144       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1202 20:55:39.707553       1 shared_informer.go:377] "Caches are synced"
	I1202 20:55:39.707589       1 policy_source.go:248] refreshing policies
	I1202 20:55:39.714286       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1202 20:55:39.815541       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1202 20:55:40.128902       1 controller.go:667] quota admission added evaluator for: namespaces
	I1202 20:55:40.180593       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1202 20:55:40.217425       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1202 20:55:40.230825       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1202 20:55:40.302713       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.167.208"}
	I1202 20:55:40.322954       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.109.127.61"}
	I1202 20:55:40.577156       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1202 20:55:43.316173       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1202 20:55:43.412589       1 controller.go:667] quota admission added evaluator for: endpoints
	I1202 20:55:43.462535       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [8a39789ad0781128fb83397c05c270ff26c09bd32ec5d4c90b8ca4d3a01533cd] <==
	I1202 20:55:42.816619       1 shared_informer.go:377] "Caches are synced"
	I1202 20:55:42.815959       1 shared_informer.go:377] "Caches are synced"
	I1202 20:55:42.816948       1 shared_informer.go:377] "Caches are synced"
	I1202 20:55:42.816362       1 shared_informer.go:377] "Caches are synced"
	I1202 20:55:42.817172       1 shared_informer.go:377] "Caches are synced"
	I1202 20:55:42.816363       1 shared_informer.go:377] "Caches are synced"
	I1202 20:55:42.817311       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1202 20:55:42.817404       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="no-preload-336331"
	I1202 20:55:42.818318       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1202 20:55:42.816381       1 shared_informer.go:377] "Caches are synced"
	I1202 20:55:42.818468       1 shared_informer.go:377] "Caches are synced"
	I1202 20:55:42.817667       1 shared_informer.go:377] "Caches are synced"
	I1202 20:55:42.817527       1 shared_informer.go:377] "Caches are synced"
	I1202 20:55:42.817586       1 shared_informer.go:377] "Caches are synced"
	I1202 20:55:42.817737       1 shared_informer.go:377] "Caches are synced"
	I1202 20:55:42.818157       1 shared_informer.go:377] "Caches are synced"
	I1202 20:55:42.818585       1 shared_informer.go:377] "Caches are synced"
	I1202 20:55:42.819730       1 shared_informer.go:377] "Caches are synced"
	I1202 20:55:42.819763       1 shared_informer.go:377] "Caches are synced"
	I1202 20:55:42.823584       1 shared_informer.go:370] "Waiting for caches to sync"
	I1202 20:55:42.823963       1 shared_informer.go:377] "Caches are synced"
	I1202 20:55:42.917906       1 shared_informer.go:377] "Caches are synced"
	I1202 20:55:42.917935       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1202 20:55:42.917942       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1202 20:55:42.924759       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [4ba2da46b3cf6cbb497d0561308e1e2541b679b2fee63afd57d239b2a5487d39] <==
	I1202 20:55:40.128839       1 server_linux.go:53] "Using iptables proxy"
	I1202 20:55:40.209286       1 shared_informer.go:370] "Waiting for caches to sync"
	I1202 20:55:40.310402       1 shared_informer.go:377] "Caches are synced"
	I1202 20:55:40.310454       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1202 20:55:40.310569       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1202 20:55:40.338607       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1202 20:55:40.338676       1 server_linux.go:136] "Using iptables Proxier"
	I1202 20:55:40.345799       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1202 20:55:40.346346       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1202 20:55:40.346370       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 20:55:40.348047       1 config.go:403] "Starting serviceCIDR config controller"
	I1202 20:55:40.348218       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1202 20:55:40.348325       1 config.go:106] "Starting endpoint slice config controller"
	I1202 20:55:40.348481       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1202 20:55:40.351508       1 config.go:309] "Starting node config controller"
	I1202 20:55:40.351634       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1202 20:55:40.351657       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1202 20:55:40.347900       1 config.go:200] "Starting service config controller"
	I1202 20:55:40.352448       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1202 20:55:40.452379       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1202 20:55:40.452392       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1202 20:55:40.452532       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [cec9f1979d354143b12bba5938c36bf941dd1a2a9c5096761b95b27d36bc9e59] <==
	I1202 20:55:38.474086       1 serving.go:386] Generated self-signed cert in-memory
	W1202 20:55:39.600733       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1202 20:55:39.600768       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1202 20:55:39.600780       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1202 20:55:39.600789       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1202 20:55:39.640554       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-beta.0"
	I1202 20:55:39.641132       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 20:55:39.645801       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1202 20:55:39.645907       1 shared_informer.go:370] "Waiting for caches to sync"
	I1202 20:55:39.646531       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1202 20:55:39.646820       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1202 20:55:39.746385       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 02 20:55:51 no-preload-336331 kubelet[716]: E1202 20:55:51.747806     716 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-no-preload-336331" containerName="kube-scheduler"
	Dec 02 20:55:52 no-preload-336331 kubelet[716]: E1202 20:55:52.749419     716 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-njbfb" containerName="kubernetes-dashboard"
	Dec 02 20:55:55 no-preload-336331 kubelet[716]: E1202 20:55:55.584351     716 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-no-preload-336331" containerName="kube-controller-manager"
	Dec 02 20:55:55 no-preload-336331 kubelet[716]: I1202 20:55:55.632120     716 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-njbfb" podStartSLOduration=5.369292721 podStartE2EDuration="12.632096499s" podCreationTimestamp="2025-12-02 20:55:43 +0000 UTC" firstStartedPulling="2025-12-02 20:55:43.730566146 +0000 UTC m=+6.173422791" lastFinishedPulling="2025-12-02 20:55:50.99336993 +0000 UTC m=+13.436226569" observedRunningTime="2025-12-02 20:55:51.765084253 +0000 UTC m=+14.207940905" watchObservedRunningTime="2025-12-02 20:55:55.632096499 +0000 UTC m=+18.074953155"
	Dec 02 20:55:57 no-preload-336331 kubelet[716]: E1202 20:55:57.670639     716 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nh2q4" containerName="dashboard-metrics-scraper"
	Dec 02 20:55:57 no-preload-336331 kubelet[716]: I1202 20:55:57.670678     716 scope.go:122] "RemoveContainer" containerID="2c5a874275c99b8ae5c4236310bf903c2bb613d66005d99b341d04c382954a4c"
	Dec 02 20:55:57 no-preload-336331 kubelet[716]: I1202 20:55:57.764964     716 scope.go:122] "RemoveContainer" containerID="2c5a874275c99b8ae5c4236310bf903c2bb613d66005d99b341d04c382954a4c"
	Dec 02 20:55:57 no-preload-336331 kubelet[716]: E1202 20:55:57.765292     716 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nh2q4" containerName="dashboard-metrics-scraper"
	Dec 02 20:55:57 no-preload-336331 kubelet[716]: I1202 20:55:57.765331     716 scope.go:122] "RemoveContainer" containerID="65a98944e23b2051f5d2803b7cd4f48cd36fbd6fd8863e62252f9e57766b98ad"
	Dec 02 20:55:57 no-preload-336331 kubelet[716]: E1202 20:55:57.765529     716 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-nh2q4_kubernetes-dashboard(3114ee57-4f0d-415c-8ca7-2fdbe67e1e5c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nh2q4" podUID="3114ee57-4f0d-415c-8ca7-2fdbe67e1e5c"
	Dec 02 20:56:00 no-preload-336331 kubelet[716]: E1202 20:56:00.895700     716 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nh2q4" containerName="dashboard-metrics-scraper"
	Dec 02 20:56:00 no-preload-336331 kubelet[716]: I1202 20:56:00.895752     716 scope.go:122] "RemoveContainer" containerID="65a98944e23b2051f5d2803b7cd4f48cd36fbd6fd8863e62252f9e57766b98ad"
	Dec 02 20:56:00 no-preload-336331 kubelet[716]: E1202 20:56:00.895979     716 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-nh2q4_kubernetes-dashboard(3114ee57-4f0d-415c-8ca7-2fdbe67e1e5c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nh2q4" podUID="3114ee57-4f0d-415c-8ca7-2fdbe67e1e5c"
	Dec 02 20:56:10 no-preload-336331 kubelet[716]: I1202 20:56:10.808218     716 scope.go:122] "RemoveContainer" containerID="80e9078d18ca5da464c12fd5d5b48d960e2c5867e07585bee911e523c9a0630a"
	Dec 02 20:56:13 no-preload-336331 kubelet[716]: E1202 20:56:13.965399     716 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-ghxk6" containerName="coredns"
	Dec 02 20:56:21 no-preload-336331 kubelet[716]: E1202 20:56:21.671150     716 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nh2q4" containerName="dashboard-metrics-scraper"
	Dec 02 20:56:21 no-preload-336331 kubelet[716]: I1202 20:56:21.671211     716 scope.go:122] "RemoveContainer" containerID="65a98944e23b2051f5d2803b7cd4f48cd36fbd6fd8863e62252f9e57766b98ad"
	Dec 02 20:56:21 no-preload-336331 kubelet[716]: I1202 20:56:21.840525     716 scope.go:122] "RemoveContainer" containerID="65a98944e23b2051f5d2803b7cd4f48cd36fbd6fd8863e62252f9e57766b98ad"
	Dec 02 20:56:21 no-preload-336331 kubelet[716]: E1202 20:56:21.840807     716 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nh2q4" containerName="dashboard-metrics-scraper"
	Dec 02 20:56:21 no-preload-336331 kubelet[716]: I1202 20:56:21.840847     716 scope.go:122] "RemoveContainer" containerID="a466626000385a894ca35d0cbbd705f0a7ea58df0bec6d3ee73e98444e45ee26"
	Dec 02 20:56:21 no-preload-336331 kubelet[716]: E1202 20:56:21.841061     716 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-nh2q4_kubernetes-dashboard(3114ee57-4f0d-415c-8ca7-2fdbe67e1e5c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nh2q4" podUID="3114ee57-4f0d-415c-8ca7-2fdbe67e1e5c"
	Dec 02 20:56:28 no-preload-336331 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 02 20:56:28 no-preload-336331 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 02 20:56:28 no-preload-336331 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 20:56:28 no-preload-336331 systemd[1]: kubelet.service: Consumed 1.906s CPU time.
	
	
	==> kubernetes-dashboard [392425906cb8929a82cf3b6a301d75ecc8a3f2afb4aca218a52e369092d206a5] <==
	2025/12/02 20:55:51 Using namespace: kubernetes-dashboard
	2025/12/02 20:55:51 Using in-cluster config to connect to apiserver
	2025/12/02 20:55:51 Using secret token for csrf signing
	2025/12/02 20:55:51 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/02 20:55:51 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/02 20:55:51 Successful initial request to the apiserver, version: v1.35.0-beta.0
	2025/12/02 20:55:51 Generating JWE encryption key
	2025/12/02 20:55:51 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/02 20:55:51 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/02 20:55:51 Initializing JWE encryption key from synchronized object
	2025/12/02 20:55:51 Creating in-cluster Sidecar client
	2025/12/02 20:55:51 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/02 20:55:51 Serving insecurely on HTTP port: 9090
	2025/12/02 20:56:21 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/02 20:55:51 Starting overwatch
	
	
	==> storage-provisioner [6a58ab9b9c0482bd5f103029c7c0f3bdb6d5c02e0fc49f59a43c1b17c375958e] <==
	I1202 20:56:10.861553       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1202 20:56:10.870848       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1202 20:56:10.870913       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1202 20:56:10.873854       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:56:14.329161       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:56:18.590289       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:56:22.189346       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:56:25.243477       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:56:28.266308       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:56:28.271132       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1202 20:56:28.271288       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1202 20:56:28.271455       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3ea12a83-8249-476a-aff4-76a34b961543", APIVersion:"v1", ResourceVersion:"644", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-336331_2dd48adb-dd9f-44e7-b07a-258cd92825d9 became leader
	I1202 20:56:28.271492       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-336331_2dd48adb-dd9f-44e7-b07a-258cd92825d9!
	W1202 20:56:28.273784       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:56:28.278118       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1202 20:56:28.371767       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-336331_2dd48adb-dd9f-44e7-b07a-258cd92825d9!
	W1202 20:56:30.281622       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:56:30.287541       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:56:32.291555       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:56:32.295882       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [80e9078d18ca5da464c12fd5d5b48d960e2c5867e07585bee911e523c9a0630a] <==
	I1202 20:55:40.115598       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1202 20:56:10.118971       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-336331 -n no-preload-336331
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-336331 -n no-preload-336331: exit status 2 (371.525643ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-336331 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (6.58s)
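Note on the status check earlier in this post-mortem: it queried only the APIServer field through a Go template (--format={{.APIServer}}), which is why the stdout shows just "Running" even though the command exited with status 2. A broader manual check, outside the test harness and assuming the no-preload-336331 profile is still present (a sketch, not part of the test run):

	out/minikube-linux-amd64 status -p no-preload-336331 --output json

The JSON output lists each component's state, which makes it easier to see which component produced the non-zero exit.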

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.22s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-386191 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-386191 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (260.487352ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T20:56:43Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
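The MK_ADDON_ENABLE_PAUSED error above shows the underlying check: minikube decides whether the runtime is paused by running "sudo runc list -f json" on the node, and that command failed here. A minimal way to reproduce the failing check by hand, assuming the embed-certs-386191 profile is still running (a sketch, not part of the test run):

	out/minikube-linux-amd64 ssh -p embed-certs-386191 -- sudo runc list -f json
	# in this run the same command exited with status 1:
	#   open /run/runc: no such file or directory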
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-386191 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-386191 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-386191 describe deploy/metrics-server -n kube-system: exit status 1 (61.259098ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-386191 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
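For reference, the assertion at start_stop_delete_test.go:219 expects the metrics-server deployment's image to contain the overridden registry. A rough manual equivalent of that check, assuming the deployment exists (it was NotFound in this run), would be:

	kubectl --context embed-certs-386191 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'
	# expected to contain: fake.domain/registry.k8s.io/echoserver:1.4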
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-386191
helpers_test.go:243: (dbg) docker inspect embed-certs-386191:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "59d0941ced13c8c23df4b796c1700e71da3dff2dc9f33015ee98bb7f9e84e6c9",
	        "Created": "2025-12-02T20:55:55.991908115Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 764045,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-02T20:55:56.039644291Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/59d0941ced13c8c23df4b796c1700e71da3dff2dc9f33015ee98bb7f9e84e6c9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/59d0941ced13c8c23df4b796c1700e71da3dff2dc9f33015ee98bb7f9e84e6c9/hostname",
	        "HostsPath": "/var/lib/docker/containers/59d0941ced13c8c23df4b796c1700e71da3dff2dc9f33015ee98bb7f9e84e6c9/hosts",
	        "LogPath": "/var/lib/docker/containers/59d0941ced13c8c23df4b796c1700e71da3dff2dc9f33015ee98bb7f9e84e6c9/59d0941ced13c8c23df4b796c1700e71da3dff2dc9f33015ee98bb7f9e84e6c9-json.log",
	        "Name": "/embed-certs-386191",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "embed-certs-386191:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-386191",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "59d0941ced13c8c23df4b796c1700e71da3dff2dc9f33015ee98bb7f9e84e6c9",
	                "LowerDir": "/var/lib/docker/overlay2/cd263fb850dea457d23961af62640291018121fa574740d96ea92fe99c9aa05c-init/diff:/var/lib/docker/overlay2/49bb44e987885c48600d5ae7ad3c81cad82a00f38070c0460882c5746d9fae59/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cd263fb850dea457d23961af62640291018121fa574740d96ea92fe99c9aa05c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cd263fb850dea457d23961af62640291018121fa574740d96ea92fe99c9aa05c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cd263fb850dea457d23961af62640291018121fa574740d96ea92fe99c9aa05c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-386191",
	                "Source": "/var/lib/docker/volumes/embed-certs-386191/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-386191",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-386191",
	                "name.minikube.sigs.k8s.io": "embed-certs-386191",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "b4c8c76f72dfd892e44821995d4be4d4eb78edc949dada280d9720988a1b6446",
	            "SandboxKey": "/var/run/docker/netns/b4c8c76f72df",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33513"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33514"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33517"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33515"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33516"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-386191": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "09e54ca661ff7c94761e454bfdeba97f3291ada6df7679173f7c9249a52d8235",
	                    "EndpointID": "0ccdf2382390aa8bf84b5f926420774d3705fc6f3782b87ca21e7e7c4bf149e8",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "fe:5a:fb:8c:61:37",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-386191",
	                        "59d0941ced13"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
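The docker inspect dump above is captured in full by the post-mortem helper; when only the network settings matter, a narrower query returns the same information (a sketch, not something the helper runs):

	docker inspect -f '{{json .NetworkSettings.Networks}}' embed-certs-386191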
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-386191 -n embed-certs-386191
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-386191 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-386191 logs -n 25: (1.05064534s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬───────
──────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼───────
──────────────┤
	│ addons  │ enable metrics-server -p newest-cni-245604 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-245604            │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-997805 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-997805 │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │                     │
	│ stop    │ -p newest-cni-245604 --alsologtostderr -v=3                                                                                                                                                                                                          │ newest-cni-245604            │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:55 UTC │
	│ stop    │ -p default-k8s-diff-port-997805 --alsologtostderr -v=3                                                                                                                                                                                               │ default-k8s-diff-port-997805 │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:55 UTC │
	│ addons  │ enable dashboard -p newest-cni-245604 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ newest-cni-245604            │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:55 UTC │
	│ start   │ -p newest-cni-245604 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-245604            │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:55 UTC │
	│ addons  │ enable dashboard -p no-preload-336331 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ no-preload-336331            │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:55 UTC │
	│ start   │ -p no-preload-336331 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-336331            │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:56 UTC │
	│ image   │ newest-cni-245604 image list --format=json                                                                                                                                                                                                           │ newest-cni-245604            │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:55 UTC │
	│ pause   │ -p newest-cni-245604 --alsologtostderr -v=1                                                                                                                                                                                                          │ newest-cni-245604            │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-997805 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-997805 │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:55 UTC │
	│ start   │ -p default-k8s-diff-port-997805 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-997805 │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:56 UTC │
	│ delete  │ -p newest-cni-245604                                                                                                                                                                                                                                 │ newest-cni-245604            │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:55 UTC │
	│ delete  │ -p newest-cni-245604                                                                                                                                                                                                                                 │ newest-cni-245604            │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:55 UTC │
	│ delete  │ -p disable-driver-mounts-234978                                                                                                                                                                                                                      │ disable-driver-mounts-234978 │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:55 UTC │
	│ start   │ -p embed-certs-386191 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-386191           │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:56 UTC │
	│ image   │ old-k8s-version-992336 image list --format=json                                                                                                                                                                                                      │ old-k8s-version-992336       │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:55 UTC │
	│ pause   │ -p old-k8s-version-992336 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-992336       │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │                     │
	│ delete  │ -p old-k8s-version-992336                                                                                                                                                                                                                            │ old-k8s-version-992336       │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:56 UTC │
	│ delete  │ -p old-k8s-version-992336                                                                                                                                                                                                                            │ old-k8s-version-992336       │ jenkins │ v1.37.0 │ 02 Dec 25 20:56 UTC │ 02 Dec 25 20:56 UTC │
	│ image   │ no-preload-336331 image list --format=json                                                                                                                                                                                                           │ no-preload-336331            │ jenkins │ v1.37.0 │ 02 Dec 25 20:56 UTC │ 02 Dec 25 20:56 UTC │
	│ pause   │ -p no-preload-336331 --alsologtostderr -v=1                                                                                                                                                                                                          │ no-preload-336331            │ jenkins │ v1.37.0 │ 02 Dec 25 20:56 UTC │                     │
	│ delete  │ -p no-preload-336331                                                                                                                                                                                                                                 │ no-preload-336331            │ jenkins │ v1.37.0 │ 02 Dec 25 20:56 UTC │ 02 Dec 25 20:56 UTC │
	│ delete  │ -p no-preload-336331                                                                                                                                                                                                                                 │ no-preload-336331            │ jenkins │ v1.37.0 │ 02 Dec 25 20:56 UTC │ 02 Dec 25 20:56 UTC │
	│ addons  │ enable metrics-server -p embed-certs-386191 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ embed-certs-386191           │ jenkins │ v1.37.0 │ 02 Dec 25 20:56 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴───────
──────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 20:55:49
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 20:55:49.973376  761851 out.go:360] Setting OutFile to fd 1 ...
	I1202 20:55:49.973479  761851 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:55:49.973486  761851 out.go:374] Setting ErrFile to fd 2...
	I1202 20:55:49.973492  761851 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:55:49.973784  761851 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-407427/.minikube/bin
	I1202 20:55:49.974402  761851 out.go:368] Setting JSON to false
	I1202 20:55:49.976053  761851 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":9494,"bootTime":1764699456,"procs":379,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 20:55:49.976153  761851 start.go:143] virtualization: kvm guest
	I1202 20:55:49.979903  761851 out.go:179] * [embed-certs-386191] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1202 20:55:49.981563  761851 out.go:179]   - MINIKUBE_LOCATION=21997
	I1202 20:55:49.981711  761851 notify.go:221] Checking for updates...
	I1202 20:55:49.985961  761851 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 20:55:49.989444  761851 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-407427/kubeconfig
	I1202 20:55:49.990856  761851 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-407427/.minikube
	I1202 20:55:49.992198  761851 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1202 20:55:49.994165  761851 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 20:55:49.996734  761851 config.go:182] Loaded profile config "default-k8s-diff-port-997805": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 20:55:49.996944  761851 config.go:182] Loaded profile config "no-preload-336331": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 20:55:49.997173  761851 config.go:182] Loaded profile config "old-k8s-version-992336": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1202 20:55:49.997373  761851 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 20:55:50.033364  761851 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1202 20:55:50.033467  761851 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 20:55:50.114622  761851 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-02 20:55:50.101227741 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 20:55:50.114779  761851 docker.go:319] overlay module found
	I1202 20:55:50.117537  761851 out.go:179] * Using the docker driver based on user configuration
	I1202 20:55:50.119145  761851 start.go:309] selected driver: docker
	I1202 20:55:50.119167  761851 start.go:927] validating driver "docker" against <nil>
	I1202 20:55:50.119183  761851 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 20:55:50.120035  761851 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 20:55:50.211212  761851 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-02 20:55:50.198488456 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 20:55:50.211445  761851 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1202 20:55:50.211790  761851 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 20:55:50.214433  761851 out.go:179] * Using Docker driver with root privileges
	I1202 20:55:50.218243  761851 cni.go:84] Creating CNI manager for ""
	I1202 20:55:50.218353  761851 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 20:55:50.218375  761851 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1202 20:55:50.218508  761851 start.go:353] cluster config:
	{Name:embed-certs-386191 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-386191 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPI
D:0 GPUs: AutoPauseInterval:1m0s}
	I1202 20:55:50.220045  761851 out.go:179] * Starting "embed-certs-386191" primary control-plane node in "embed-certs-386191" cluster
	I1202 20:55:50.221707  761851 cache.go:134] Beginning downloading kic base image for docker with crio
	I1202 20:55:50.223105  761851 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1202 20:55:50.224334  761851 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 20:55:50.224383  761851 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 20:55:50.224379  761851 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21997-407427/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1202 20:55:50.224423  761851 cache.go:65] Caching tarball of preloaded images
	I1202 20:55:50.224531  761851 preload.go:238] Found /home/jenkins/minikube-integration/21997-407427/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1202 20:55:50.224544  761851 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1202 20:55:50.224682  761851 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/config.json ...
	I1202 20:55:50.224706  761851 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/config.json: {Name:mk4df57c1427e88de36c6d265cf4b7b9447ba4a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:55:50.254982  761851 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1202 20:55:50.255008  761851 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1202 20:55:50.255030  761851 cache.go:243] Successfully downloaded all kic artifacts
	I1202 20:55:50.255092  761851 start.go:360] acquireMachinesLock for embed-certs-386191: {Name:mk07b451c8d7193712ed79603183bf03b141f2ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 20:55:50.255209  761851 start.go:364] duration metric: took 90.207µs to acquireMachinesLock for "embed-certs-386191"
	I1202 20:55:50.255244  761851 start.go:93] Provisioning new machine with config: &{Name:embed-certs-386191 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-386191 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 20:55:50.255372  761851 start.go:125] createHost starting for "" (driver="docker")
	W1202 20:55:47.478474  754876 pod_ready.go:104] pod "coredns-7d764666f9-ghxk6" is not "Ready", error: <nil>
	W1202 20:55:49.480219  754876 pod_ready.go:104] pod "coredns-7d764666f9-ghxk6" is not "Ready", error: <nil>
	I1202 20:55:48.658867  759377 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 20:55:48.658893  759377 machine.go:97] duration metric: took 4.363922202s to provisionDockerMachine
	I1202 20:55:48.658908  759377 start.go:293] postStartSetup for "default-k8s-diff-port-997805" (driver="docker")
	I1202 20:55:48.659934  759377 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 20:55:48.660266  759377 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 20:55:48.660319  759377 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-997805
	I1202 20:55:48.684270  759377 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/default-k8s-diff-port-997805/id_rsa Username:docker}
	I1202 20:55:48.800470  759377 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 20:55:48.806594  759377 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 20:55:48.806641  759377 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 20:55:48.806659  759377 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-407427/.minikube/addons for local assets ...
	I1202 20:55:48.806723  759377 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-407427/.minikube/files for local assets ...
	I1202 20:55:48.806832  759377 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-407427/.minikube/files/etc/ssl/certs/4110322.pem -> 4110322.pem in /etc/ssl/certs
	I1202 20:55:48.807095  759377 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 20:55:48.817526  759377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/files/etc/ssl/certs/4110322.pem --> /etc/ssl/certs/4110322.pem (1708 bytes)
	I1202 20:55:48.843728  759377 start.go:296] duration metric: took 183.799228ms for postStartSetup
	I1202 20:55:48.843844  759377 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 20:55:48.843886  759377 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-997805
	I1202 20:55:48.867562  759377 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/default-k8s-diff-port-997805/id_rsa Username:docker}
	I1202 20:55:48.976679  759377 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 20:55:48.983737  759377 fix.go:56] duration metric: took 5.130755935s for fixHost
	I1202 20:55:48.983779  759377 start.go:83] releasing machines lock for "default-k8s-diff-port-997805", held for 5.130814844s
	I1202 20:55:48.983853  759377 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-997805
	I1202 20:55:49.008951  759377 ssh_runner.go:195] Run: cat /version.json
	I1202 20:55:49.009046  759377 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-997805
	I1202 20:55:49.009048  759377 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 20:55:49.009136  759377 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-997805
	I1202 20:55:49.034693  759377 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/default-k8s-diff-port-997805/id_rsa Username:docker}
	I1202 20:55:49.035313  759377 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/default-k8s-diff-port-997805/id_rsa Username:docker}
	I1202 20:55:49.217584  759377 ssh_runner.go:195] Run: systemctl --version
	I1202 20:55:49.226948  759377 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 20:55:49.280525  759377 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 20:55:49.287579  759377 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 20:55:49.287663  759377 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 20:55:49.299593  759377 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1202 20:55:49.299624  759377 start.go:496] detecting cgroup driver to use...
	I1202 20:55:49.299667  759377 detect.go:190] detected "systemd" cgroup driver on host os
	I1202 20:55:49.299717  759377 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 20:55:49.321346  759377 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 20:55:49.340202  759377 docker.go:218] disabling cri-docker service (if available) ...
	I1202 20:55:49.340276  759377 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 20:55:49.364580  759377 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 20:55:49.384570  759377 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 20:55:49.507838  759377 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 20:55:49.636982  759377 docker.go:234] disabling docker service ...
	I1202 20:55:49.637124  759377 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 20:55:49.660429  759377 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 20:55:49.676580  759377 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 20:55:49.805919  759377 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 20:55:49.932552  759377 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 20:55:49.950808  759377 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 20:55:49.973269  759377 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1202 20:55:49.973378  759377 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:55:49.987382  759377 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1202 20:55:49.987446  759377 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:55:50.001518  759377 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:55:50.015622  759377 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:55:50.029383  759377 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 20:55:50.042396  759377 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:55:50.055622  759377 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:55:50.069706  759377 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:55:50.082027  759377 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 20:55:50.093878  759377 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 20:55:50.106172  759377 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 20:55:50.241651  759377 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1202 20:55:51.093615  759377 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 20:55:51.093712  759377 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 20:55:51.098803  759377 start.go:564] Will wait 60s for crictl version
	I1202 20:55:51.098893  759377 ssh_runner.go:195] Run: which crictl
	I1202 20:55:51.103616  759377 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 20:55:51.134275  759377 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1202 20:55:51.134365  759377 ssh_runner.go:195] Run: crio --version
	I1202 20:55:51.176508  759377 ssh_runner.go:195] Run: crio --version
	I1202 20:55:51.212619  759377 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.2 ...
	I1202 20:55:51.213954  759377 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-997805 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 20:55:51.239456  759377 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1202 20:55:51.247008  759377 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 20:55:51.258836  759377 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-997805 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-997805 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 20:55:51.259035  759377 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 20:55:51.259113  759377 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 20:55:51.305184  759377 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 20:55:51.305211  759377 crio.go:433] Images already preloaded, skipping extraction
	I1202 20:55:51.305279  759377 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 20:55:51.336679  759377 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 20:55:51.336721  759377 cache_images.go:86] Images are preloaded, skipping loading
	I1202 20:55:51.336736  759377 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.2 crio true true} ...
	I1202 20:55:51.336850  759377 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-997805 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-997805 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1202 20:55:51.336915  759377 ssh_runner.go:195] Run: crio config
	I1202 20:55:51.395485  759377 cni.go:84] Creating CNI manager for ""
	I1202 20:55:51.395526  759377 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 20:55:51.395553  759377 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1202 20:55:51.395590  759377 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-997805 NodeName:default-k8s-diff-port-997805 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.c
rt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 20:55:51.395786  759377 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-997805"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1202 20:55:51.395870  759377 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1202 20:55:51.406735  759377 binaries.go:51] Found k8s binaries, skipping transfer
	I1202 20:55:51.406822  759377 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 20:55:51.416228  759377 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1202 20:55:51.430748  759377 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1202 20:55:51.448244  759377 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I1202 20:55:51.463482  759377 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1202 20:55:51.467906  759377 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 20:55:51.480393  759377 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 20:55:51.588830  759377 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 20:55:51.618253  759377 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/default-k8s-diff-port-997805 for IP: 192.168.85.2
	I1202 20:55:51.618282  759377 certs.go:195] generating shared ca certs ...
	I1202 20:55:51.618303  759377 certs.go:227] acquiring lock for ca certs: {Name:mkea11924193dd4dc459e16d70207bde383196df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:55:51.618470  759377 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-407427/.minikube/ca.key
	I1202 20:55:51.618534  759377 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-407427/.minikube/proxy-client-ca.key
	I1202 20:55:51.618547  759377 certs.go:257] generating profile certs ...
	I1202 20:55:51.618661  759377 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/default-k8s-diff-port-997805/client.key
	I1202 20:55:51.618759  759377 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/default-k8s-diff-port-997805/apiserver.key.36ffc693
	I1202 20:55:51.618817  759377 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/default-k8s-diff-port-997805/proxy-client.key
	I1202 20:55:51.618958  759377 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/411032.pem (1338 bytes)
	W1202 20:55:51.619000  759377 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-407427/.minikube/certs/411032_empty.pem, impossibly tiny 0 bytes
	I1202 20:55:51.619010  759377 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca-key.pem (1679 bytes)
	I1202 20:55:51.619043  759377 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca.pem (1082 bytes)
	I1202 20:55:51.619087  759377 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/cert.pem (1123 bytes)
	I1202 20:55:51.619120  759377 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/key.pem (1675 bytes)
	I1202 20:55:51.619173  759377 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/files/etc/ssl/certs/4110322.pem (1708 bytes)
	I1202 20:55:51.619958  759377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 20:55:51.642775  759377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1202 20:55:51.668086  759377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 20:55:51.695111  759377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1202 20:55:51.723055  759377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/default-k8s-diff-port-997805/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1202 20:55:51.757108  759377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/default-k8s-diff-port-997805/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1202 20:55:51.782582  759377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/default-k8s-diff-port-997805/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 20:55:51.803028  759377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/default-k8s-diff-port-997805/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1202 20:55:51.823897  759377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 20:55:51.845621  759377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/certs/411032.pem --> /usr/share/ca-certificates/411032.pem (1338 bytes)
	I1202 20:55:51.866855  759377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/files/etc/ssl/certs/4110322.pem --> /usr/share/ca-certificates/4110322.pem (1708 bytes)
	I1202 20:55:51.890515  759377 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 20:55:51.906355  759377 ssh_runner.go:195] Run: openssl version
	I1202 20:55:51.914259  759377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4110322.pem && ln -fs /usr/share/ca-certificates/4110322.pem /etc/ssl/certs/4110322.pem"
	I1202 20:55:51.925148  759377 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4110322.pem
	I1202 20:55:51.929800  759377 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 20:13 /usr/share/ca-certificates/4110322.pem
	I1202 20:55:51.929869  759377 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4110322.pem
	I1202 20:55:51.972279  759377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4110322.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 20:55:51.983418  759377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 20:55:51.993784  759377 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 20:55:51.999249  759377 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 19:55 /usr/share/ca-certificates/minikubeCA.pem
	I1202 20:55:51.999316  759377 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 20:55:52.049373  759377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 20:55:52.061515  759377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/411032.pem && ln -fs /usr/share/ca-certificates/411032.pem /etc/ssl/certs/411032.pem"
	I1202 20:55:52.072126  759377 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/411032.pem
	I1202 20:55:52.076862  759377 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 20:13 /usr/share/ca-certificates/411032.pem
	I1202 20:55:52.076956  759377 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/411032.pem
	I1202 20:55:52.126642  759377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/411032.pem /etc/ssl/certs/51391683.0"
	I1202 20:55:52.138458  759377 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 20:55:52.143543  759377 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1202 20:55:52.198225  759377 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1202 20:55:52.254754  759377 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1202 20:55:52.319722  759377 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1202 20:55:52.380903  759377 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1202 20:55:52.422910  759377 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1202 20:55:52.483325  759377 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-997805 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-997805 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 20:55:52.483438  759377 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 20:55:52.483499  759377 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 20:55:52.522620  759377 cri.go:89] found id: "25e14e8feafb6c0d6c5261cd5e507b812e39fcb9c7e196408fe69d780ebbcd1d"
	I1202 20:55:52.522651  759377 cri.go:89] found id: "0c7e2844e2dbdbf5b9ffe8bf4e8d07304b64b059e3d4c965c2010c5d8a39c499"
	I1202 20:55:52.522657  759377 cri.go:89] found id: "81b0ec87511a05a7501d98eb27c52f69372a4b30c4ea523db262c140f9b68cd3"
	I1202 20:55:52.522662  759377 cri.go:89] found id: "e13e6c4d6c5da602ac2e1402a7612205c5a0ceffdccf7618da3035e562a7d9d3"
	I1202 20:55:52.522667  759377 cri.go:89] found id: ""
	I1202 20:55:52.522718  759377 ssh_runner.go:195] Run: sudo runc list -f json
	W1202 20:55:52.539274  759377 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T20:55:52Z" level=error msg="open /run/runc: no such file or directory"
	I1202 20:55:52.539358  759377 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 20:55:52.550759  759377 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1202 20:55:52.550911  759377 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1202 20:55:52.550977  759377 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1202 20:55:52.562444  759377 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1202 20:55:52.563380  759377 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-997805" does not appear in /home/jenkins/minikube-integration/21997-407427/kubeconfig
	I1202 20:55:52.563867  759377 kubeconfig.go:62] /home/jenkins/minikube-integration/21997-407427/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-997805" cluster setting kubeconfig missing "default-k8s-diff-port-997805" context setting]
	I1202 20:55:52.564708  759377 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-407427/kubeconfig: {Name:mk71769b01652992a8aaa239aae4f5c38b3500e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:55:52.567122  759377 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1202 20:55:52.580423  759377 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1202 20:55:52.580475  759377 kubeadm.go:602] duration metric: took 29.545337ms to restartPrimaryControlPlane
	I1202 20:55:52.580492  759377 kubeadm.go:403] duration metric: took 97.179033ms to StartCluster
	I1202 20:55:52.580515  759377 settings.go:142] acquiring lock: {Name:mk79858205e0c110b31d2645b49097e349ff37c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:55:52.580624  759377 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-407427/kubeconfig
	I1202 20:55:52.582395  759377 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-407427/kubeconfig: {Name:mk71769b01652992a8aaa239aae4f5c38b3500e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:55:52.582737  759377 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 20:55:52.582982  759377 config.go:182] Loaded profile config "default-k8s-diff-port-997805": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 20:55:52.583044  759377 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1202 20:55:52.583145  759377 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-997805"
	I1202 20:55:52.583167  759377 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-997805"
	W1202 20:55:52.583180  759377 addons.go:248] addon storage-provisioner should already be in state true
	I1202 20:55:52.583208  759377 host.go:66] Checking if "default-k8s-diff-port-997805" exists ...
	I1202 20:55:52.583706  759377 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-997805 --format={{.State.Status}}
	I1202 20:55:52.583924  759377 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-997805"
	I1202 20:55:52.583949  759377 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-997805"
	W1202 20:55:52.583958  759377 addons.go:248] addon dashboard should already be in state true
	I1202 20:55:52.583987  759377 host.go:66] Checking if "default-k8s-diff-port-997805" exists ...
	I1202 20:55:52.584470  759377 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-997805 --format={{.State.Status}}
	I1202 20:55:52.584621  759377 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-997805"
	I1202 20:55:52.584638  759377 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-997805"
	I1202 20:55:52.584909  759377 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-997805 --format={{.State.Status}}
	I1202 20:55:52.590138  759377 out.go:179] * Verifying Kubernetes components...
	I1202 20:55:52.591985  759377 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 20:55:52.621520  759377 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-997805"
	W1202 20:55:52.621550  759377 addons.go:248] addon default-storageclass should already be in state true
	I1202 20:55:52.621581  759377 host.go:66] Checking if "default-k8s-diff-port-997805" exists ...
	I1202 20:55:52.621962  759377 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1202 20:55:52.621973  759377 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 20:55:52.622100  759377 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-997805 --format={{.State.Status}}
	I1202 20:55:52.623522  759377 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 20:55:52.623542  759377 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1202 20:55:52.623861  759377 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-997805
	I1202 20:55:52.629794  759377 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1202 20:55:52.631326  759377 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1202 20:55:52.631354  759377 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1202 20:55:52.631441  759377 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-997805
	I1202 20:55:52.650454  759377 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1202 20:55:52.650440  759377 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/default-k8s-diff-port-997805/id_rsa Username:docker}
	I1202 20:55:52.650477  759377 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1202 20:55:52.650539  759377 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-997805
	I1202 20:55:52.664697  759377 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/default-k8s-diff-port-997805/id_rsa Username:docker}
	I1202 20:55:52.687593  759377 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/default-k8s-diff-port-997805/id_rsa Username:docker}
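
	The three `docker container inspect -f ... "22/tcp" ...` calls above resolve the host-side port that the subsequent sshutil clients dial (127.0.0.1:33508). A standalone sketch of that lookup, assuming only the container name taken from the log (this is not minikube's own code):

	// Illustrative sketch only: derive the host-side SSH endpoint used by the
	// sshutil clients above from the published 22/tcp port of the kic container.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command(
			"docker", "container", "inspect", "-f",
			`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
			"default-k8s-diff-port-997805",
		).Output()
		if err != nil {
			panic(err)
		}
		port := strings.TrimSpace(string(out))
		fmt.Printf("ssh endpoint: 127.0.0.1:%s (user docker, machine id_rsa key)\n", port)
	}
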
	I1202 20:55:52.782783  759377 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 20:55:52.788136  759377 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 20:55:52.796186  759377 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1202 20:55:52.796227  759377 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1202 20:55:52.805245  759377 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-997805" to be "Ready" ...
	I1202 20:55:52.813493  759377 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1202 20:55:52.816061  759377 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1202 20:55:52.816120  759377 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1202 20:55:52.836609  759377 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1202 20:55:52.836641  759377 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1202 20:55:52.858664  759377 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1202 20:55:52.858695  759377 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1202 20:55:52.881817  759377 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1202 20:55:52.881850  759377 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1202 20:55:52.898249  759377 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1202 20:55:52.898282  759377 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1202 20:55:52.916317  759377 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1202 20:55:52.916341  759377 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1202 20:55:52.934311  759377 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1202 20:55:52.934421  759377 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1202 20:55:52.954130  759377 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1202 20:55:52.954156  759377 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1202 20:55:52.971994  759377 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1202 20:55:50.259730  761851 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1202 20:55:50.260957  761851 start.go:159] libmachine.API.Create for "embed-certs-386191" (driver="docker")
	I1202 20:55:50.261018  761851 client.go:173] LocalClient.Create starting
	I1202 20:55:50.261131  761851 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca.pem
	I1202 20:55:50.261175  761851 main.go:143] libmachine: Decoding PEM data...
	I1202 20:55:50.261199  761851 main.go:143] libmachine: Parsing certificate...
	I1202 20:55:50.261293  761851 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21997-407427/.minikube/certs/cert.pem
	I1202 20:55:50.261321  761851 main.go:143] libmachine: Decoding PEM data...
	I1202 20:55:50.261336  761851 main.go:143] libmachine: Parsing certificate...
	I1202 20:55:50.261828  761851 cli_runner.go:164] Run: docker network inspect embed-certs-386191 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1202 20:55:50.287353  761851 cli_runner.go:211] docker network inspect embed-certs-386191 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1202 20:55:50.287436  761851 network_create.go:284] running [docker network inspect embed-certs-386191] to gather additional debugging logs...
	I1202 20:55:50.287467  761851 cli_runner.go:164] Run: docker network inspect embed-certs-386191
	W1202 20:55:50.313420  761851 cli_runner.go:211] docker network inspect embed-certs-386191 returned with exit code 1
	I1202 20:55:50.313458  761851 network_create.go:287] error running [docker network inspect embed-certs-386191]: docker network inspect embed-certs-386191: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-386191 not found
	I1202 20:55:50.313493  761851 network_create.go:289] output of [docker network inspect embed-certs-386191]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-386191 not found
	
	** /stderr **
	I1202 20:55:50.313695  761851 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 20:55:50.339597  761851 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-acf081edf266 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:42:04:c0:60:47:62} reservation:<nil>}
	I1202 20:55:50.340759  761851 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-9623a21fb225 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:6a:fc:8b:40:15:1b} reservation:<nil>}
	I1202 20:55:50.341559  761851 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-f2b79e7e26a5 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:0e:c7:f4:38:1c:32} reservation:<nil>}
	I1202 20:55:50.342581  761851 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-be4fb772701b IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:22:87:5f:38:96:b7} reservation:<nil>}
	I1202 20:55:50.343861  761851 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-13fe483902b9 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:a2:a4:21:b2:62:5a} reservation:<nil>}
	I1202 20:55:50.344785  761851 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-65ab470fa0e2 IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:16:23:28:7c:c5:24} reservation:<nil>}
	I1202 20:55:50.346012  761851 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001fb66a0}
	I1202 20:55:50.346044  761851 network_create.go:124] attempt to create docker network embed-certs-386191 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1202 20:55:50.346142  761851 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-386191 embed-certs-386191
	I1202 20:55:50.449757  761851 network_create.go:108] docker network embed-certs-386191 192.168.103.0/24 created
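
	The skipped-subnet lines above show the scan that ends at 192.168.103.0/24 for the new network. A minimal sketch of that selection, with the taken subnets hard-coded from this log (minikube itself derives them from `docker network inspect`):

	// Minimal sketch of the free-subnet scan visible in the log above: walk the
	// 192.168.x.0/24 candidates and take the first one no existing bridge uses.
	package main

	import "fmt"

	func main() {
		taken := map[string]bool{
			"192.168.49.0/24": true, "192.168.58.0/24": true,
			"192.168.67.0/24": true, "192.168.76.0/24": true,
			"192.168.85.0/24": true, "192.168.94.0/24": true,
		}
		for octet := 49; octet < 256; octet += 9 { // candidates advance by 9, as in the log
			subnet := fmt.Sprintf("192.168.%d.0/24", octet)
			if !taken[subnet] {
				fmt.Println("using free private subnet", subnet) // prints 192.168.103.0/24 here
				return
			}
		}
	}
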
	I1202 20:55:50.449812  761851 kic.go:121] calculated static IP "192.168.103.2" for the "embed-certs-386191" container
	I1202 20:55:50.449912  761851 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1202 20:55:50.476319  761851 cli_runner.go:164] Run: docker volume create embed-certs-386191 --label name.minikube.sigs.k8s.io=embed-certs-386191 --label created_by.minikube.sigs.k8s.io=true
	I1202 20:55:50.544287  761851 oci.go:103] Successfully created a docker volume embed-certs-386191
	I1202 20:55:50.544384  761851 cli_runner.go:164] Run: docker run --rm --name embed-certs-386191-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-386191 --entrypoint /usr/bin/test -v embed-certs-386191:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib
	I1202 20:55:51.390297  761851 oci.go:107] Successfully prepared a docker volume embed-certs-386191
	I1202 20:55:51.390398  761851 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 20:55:51.390416  761851 kic.go:194] Starting extracting preloaded images to volume ...
	I1202 20:55:51.390490  761851 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21997-407427/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-386191:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -I lz4 -xf /preloaded.tar -C /extractDir
	W1202 20:55:51.979014  754876 pod_ready.go:104] pod "coredns-7d764666f9-ghxk6" is not "Ready", error: <nil>
	W1202 20:55:54.048006  754876 pod_ready.go:104] pod "coredns-7d764666f9-ghxk6" is not "Ready", error: <nil>
	I1202 20:55:54.222552  759377 node_ready.go:49] node "default-k8s-diff-port-997805" is "Ready"
	I1202 20:55:54.222597  759377 node_ready.go:38] duration metric: took 1.417304277s for node "default-k8s-diff-port-997805" to be "Ready" ...
	I1202 20:55:54.222616  759377 api_server.go:52] waiting for apiserver process to appear ...
	I1202 20:55:54.222680  759377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:55:55.521273  759377 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.733090646s)
	I1202 20:55:55.521348  759377 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.707827699s)
	I1202 20:55:55.956240  759377 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.984189677s)
	I1202 20:55:55.956260  759377 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.733551247s)
	I1202 20:55:55.956296  759377 api_server.go:72] duration metric: took 3.373517458s to wait for apiserver process to appear ...
	I1202 20:55:55.956305  759377 api_server.go:88] waiting for apiserver healthz status ...
	I1202 20:55:55.956329  759377 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1202 20:55:55.957591  759377 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-997805 addons enable metrics-server
	
	I1202 20:55:55.960080  759377 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1202 20:55:55.961425  759377 addons.go:530] duration metric: took 3.378380909s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1202 20:55:55.963108  759377 api_server.go:279] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 20:55:55.963149  759377 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 20:55:56.456815  759377 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1202 20:55:56.464867  759377 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1202 20:55:56.466374  759377 api_server.go:141] control plane version: v1.34.2
	I1202 20:55:56.466405  759377 api_server.go:131] duration metric: took 510.092ms to wait for apiserver health ...
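
	The healthz probe above first sees a 500 while the rbac/bootstrap-roles post-start hook finishes, then a 200 on the next attempt roughly half a second later. A small polling sketch of the same check, not minikube's implementation; the endpoint is the one from the log, and TLS verification is skipped only because this is an illustration:

	// Poll the apiserver /healthz endpoint until it returns 200 or a deadline passes.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://192.168.85.2:8444/healthz")
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("apiserver healthz ok")
					return
				}
				fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for apiserver health")
	}
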
	I1202 20:55:56.466417  759377 system_pods.go:43] waiting for kube-system pods to appear ...
	I1202 20:55:56.470286  759377 system_pods.go:59] 8 kube-system pods found
	I1202 20:55:56.470321  759377 system_pods.go:61] "coredns-66bc5c9577-jrln7" [37de7399-6357-4f08-9240-fc9e0d884f47] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 20:55:56.470336  759377 system_pods.go:61] "etcd-default-k8s-diff-port-997805" [446321a2-6abf-495a-8198-da2e43d8c18d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1202 20:55:56.470354  759377 system_pods.go:61] "kindnet-rzqpn" [eabc6de0-0707-4a1b-ab5a-4f6e8255bcfb] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1202 20:55:56.470364  759377 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-997805" [04c7ea7b-9b4e-44ee-8148-107daccf6b7b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1202 20:55:56.470376  759377 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-997805" [65af3257-fa45-4d8c-bab3-46c0390ab8df] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1202 20:55:56.470395  759377 system_pods.go:61] "kube-proxy-s2jpn" [407f6b3c-8d8b-47b0-b994-c061eedc6420] Running
	I1202 20:55:56.470403  759377 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-997805" [803a952f-54ba-4326-b70f-907fc7db368e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1202 20:55:56.470411  759377 system_pods.go:61] "storage-provisioner" [08893b97-1192-4f4e-8636-8f2ba82c853d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 20:55:56.470419  759377 system_pods.go:74] duration metric: took 3.994668ms to wait for pod list to return data ...
	I1202 20:55:56.470434  759377 default_sa.go:34] waiting for default service account to be created ...
	I1202 20:55:56.472796  759377 default_sa.go:45] found service account: "default"
	I1202 20:55:56.472821  759377 default_sa.go:55] duration metric: took 2.376879ms for default service account to be created ...
	I1202 20:55:56.472832  759377 system_pods.go:116] waiting for k8s-apps to be running ...
	I1202 20:55:56.476530  759377 system_pods.go:86] 8 kube-system pods found
	I1202 20:55:56.476568  759377 system_pods.go:89] "coredns-66bc5c9577-jrln7" [37de7399-6357-4f08-9240-fc9e0d884f47] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 20:55:56.476586  759377 system_pods.go:89] "etcd-default-k8s-diff-port-997805" [446321a2-6abf-495a-8198-da2e43d8c18d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1202 20:55:56.476598  759377 system_pods.go:89] "kindnet-rzqpn" [eabc6de0-0707-4a1b-ab5a-4f6e8255bcfb] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1202 20:55:56.476611  759377 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-997805" [04c7ea7b-9b4e-44ee-8148-107daccf6b7b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1202 20:55:56.476622  759377 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-997805" [65af3257-fa45-4d8c-bab3-46c0390ab8df] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1202 20:55:56.476636  759377 system_pods.go:89] "kube-proxy-s2jpn" [407f6b3c-8d8b-47b0-b994-c061eedc6420] Running
	I1202 20:55:56.476644  759377 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-997805" [803a952f-54ba-4326-b70f-907fc7db368e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1202 20:55:56.476652  759377 system_pods.go:89] "storage-provisioner" [08893b97-1192-4f4e-8636-8f2ba82c853d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 20:55:56.476666  759377 system_pods.go:126] duration metric: took 3.826088ms to wait for k8s-apps to be running ...
	I1202 20:55:56.476679  759377 system_svc.go:44] waiting for kubelet service to be running ....
	I1202 20:55:56.476731  759377 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 20:55:56.496595  759377 system_svc.go:56] duration metric: took 19.904103ms WaitForService to wait for kubelet
	I1202 20:55:56.496628  759377 kubeadm.go:587] duration metric: took 3.913848958s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 20:55:56.496651  759377 node_conditions.go:102] verifying NodePressure condition ...
	I1202 20:55:56.501320  759377 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1202 20:55:56.501357  759377 node_conditions.go:123] node cpu capacity is 8
	I1202 20:55:56.501378  759377 node_conditions.go:105] duration metric: took 4.719966ms to run NodePressure ...
	I1202 20:55:56.501394  759377 start.go:242] waiting for startup goroutines ...
	I1202 20:55:56.501406  759377 start.go:247] waiting for cluster config update ...
	I1202 20:55:56.501422  759377 start.go:256] writing updated cluster config ...
	I1202 20:55:56.501764  759377 ssh_runner.go:195] Run: rm -f paused
	I1202 20:55:56.507506  759377 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1202 20:55:56.511978  759377 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-jrln7" in "kube-system" namespace to be "Ready" or be gone ...
	W1202 20:55:58.518638  759377 pod_ready.go:104] pod "coredns-66bc5c9577-jrln7" is not "Ready", error: <nil>
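
	The pod_ready wait above keeps polling coredns-66bc5c9577-jrln7 until its PodReady condition turns True (or it is gone). A minimal client-go sketch of such a wait, assuming the kubeconfig path shown earlier in the log; this is illustrative, not minikube's pod_ready.go:

	// Poll one kube-system pod until its PodReady condition is True.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21997-407427/kubeconfig")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		for deadline := time.Now().Add(4 * time.Minute); time.Now().Before(deadline); time.Sleep(2 * time.Second) {
			pod, err := client.CoreV1().Pods("kube-system").Get(context.Background(), "coredns-66bc5c9577-jrln7", metav1.GetOptions{})
			if err != nil {
				continue // pod may not be visible yet; keep waiting
			}
			for _, cond := range pod.Status.Conditions {
				if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
					fmt.Println("coredns is Ready")
					return
				}
			}
		}
		fmt.Println("timed out waiting for coredns to become Ready")
	}
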
	I1202 20:55:55.882395  761851 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21997-407427/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-386191:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -I lz4 -xf /preloaded.tar -C /extractDir: (4.491855191s)
	I1202 20:55:55.882432  761851 kic.go:203] duration metric: took 4.49201135s to extract preloaded images to volume ...
	W1202 20:55:55.882649  761851 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1202 20:55:55.882730  761851 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1202 20:55:55.882796  761851 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1202 20:55:55.970786  761851 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-386191 --name embed-certs-386191 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-386191 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-386191 --network embed-certs-386191 --ip 192.168.103.2 --volume embed-certs-386191:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b
	I1202 20:55:56.322797  761851 cli_runner.go:164] Run: docker container inspect embed-certs-386191 --format={{.State.Running}}
	I1202 20:55:56.346318  761851 cli_runner.go:164] Run: docker container inspect embed-certs-386191 --format={{.State.Status}}
	I1202 20:55:56.369508  761851 cli_runner.go:164] Run: docker exec embed-certs-386191 stat /var/lib/dpkg/alternatives/iptables
	I1202 20:55:56.426161  761851 oci.go:144] the created container "embed-certs-386191" has a running status.
	I1202 20:55:56.426198  761851 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21997-407427/.minikube/machines/embed-certs-386191/id_rsa...
	I1202 20:55:56.605690  761851 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21997-407427/.minikube/machines/embed-certs-386191/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1202 20:55:56.639247  761851 cli_runner.go:164] Run: docker container inspect embed-certs-386191 --format={{.State.Status}}
	I1202 20:55:56.661049  761851 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1202 20:55:56.661086  761851 kic_runner.go:114] Args: [docker exec --privileged embed-certs-386191 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1202 20:55:56.743919  761851 cli_runner.go:164] Run: docker container inspect embed-certs-386191 --format={{.State.Status}}
	I1202 20:55:56.771200  761851 machine.go:94] provisionDockerMachine start ...
	I1202 20:55:56.771338  761851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-386191
	I1202 20:55:56.796209  761851 main.go:143] libmachine: Using SSH client type: native
	I1202 20:55:56.796568  761851 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33513 <nil> <nil>}
	I1202 20:55:56.796593  761851 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 20:55:56.950615  761851 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-386191
	
	I1202 20:55:56.950657  761851 ubuntu.go:182] provisioning hostname "embed-certs-386191"
	I1202 20:55:56.950733  761851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-386191
	I1202 20:55:56.973211  761851 main.go:143] libmachine: Using SSH client type: native
	I1202 20:55:56.973537  761851 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33513 <nil> <nil>}
	I1202 20:55:56.973561  761851 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-386191 && echo "embed-certs-386191" | sudo tee /etc/hostname
	I1202 20:55:57.141391  761851 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-386191
	
	I1202 20:55:57.141500  761851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-386191
	I1202 20:55:57.162911  761851 main.go:143] libmachine: Using SSH client type: native
	I1202 20:55:57.163198  761851 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33513 <nil> <nil>}
	I1202 20:55:57.163228  761851 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-386191' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-386191/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-386191' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 20:55:57.310513  761851 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1202 20:55:57.310553  761851 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-407427/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-407427/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-407427/.minikube}
	I1202 20:55:57.310589  761851 ubuntu.go:190] setting up certificates
	I1202 20:55:57.310609  761851 provision.go:84] configureAuth start
	I1202 20:55:57.310699  761851 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-386191
	I1202 20:55:57.331293  761851 provision.go:143] copyHostCerts
	I1202 20:55:57.331361  761851 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-407427/.minikube/cert.pem, removing ...
	I1202 20:55:57.331377  761851 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-407427/.minikube/cert.pem
	I1202 20:55:57.331457  761851 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-407427/.minikube/cert.pem (1123 bytes)
	I1202 20:55:57.331608  761851 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-407427/.minikube/key.pem, removing ...
	I1202 20:55:57.331619  761851 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-407427/.minikube/key.pem
	I1202 20:55:57.331661  761851 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-407427/.minikube/key.pem (1675 bytes)
	I1202 20:55:57.331806  761851 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-407427/.minikube/ca.pem, removing ...
	I1202 20:55:57.331820  761851 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-407427/.minikube/ca.pem
	I1202 20:55:57.331861  761851 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-407427/.minikube/ca.pem (1082 bytes)
	I1202 20:55:57.331969  761851 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-407427/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca-key.pem org=jenkins.embed-certs-386191 san=[127.0.0.1 192.168.103.2 embed-certs-386191 localhost minikube]
	I1202 20:55:57.478343  761851 provision.go:177] copyRemoteCerts
	I1202 20:55:57.478412  761851 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 20:55:57.478461  761851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-386191
	I1202 20:55:57.503684  761851 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33513 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/embed-certs-386191/id_rsa Username:docker}
	I1202 20:55:57.613653  761851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 20:55:57.638025  761851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1202 20:55:57.660295  761851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1202 20:55:57.684474  761851 provision.go:87] duration metric: took 373.842939ms to configureAuth
	I1202 20:55:57.684512  761851 ubuntu.go:206] setting minikube options for container-runtime
	I1202 20:55:57.684722  761851 config.go:182] Loaded profile config "embed-certs-386191": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 20:55:57.684859  761851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-386191
	I1202 20:55:57.705791  761851 main.go:143] libmachine: Using SSH client type: native
	I1202 20:55:57.706104  761851 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33513 <nil> <nil>}
	I1202 20:55:57.706127  761851 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 20:55:58.017837  761851 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 20:55:58.017867  761851 machine.go:97] duration metric: took 1.246644154s to provisionDockerMachine
	I1202 20:55:58.017881  761851 client.go:176] duration metric: took 7.756854866s to LocalClient.Create
	I1202 20:55:58.017904  761851 start.go:167] duration metric: took 7.756953433s to libmachine.API.Create "embed-certs-386191"
	I1202 20:55:58.017914  761851 start.go:293] postStartSetup for "embed-certs-386191" (driver="docker")
	I1202 20:55:58.017926  761851 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 20:55:58.017993  761851 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 20:55:58.018051  761851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-386191
	I1202 20:55:58.040966  761851 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33513 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/embed-certs-386191/id_rsa Username:docker}
	I1202 20:55:58.164646  761851 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 20:55:58.169173  761851 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 20:55:58.169218  761851 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 20:55:58.169234  761851 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-407427/.minikube/addons for local assets ...
	I1202 20:55:58.169292  761851 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-407427/.minikube/files for local assets ...
	I1202 20:55:58.169398  761851 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-407427/.minikube/files/etc/ssl/certs/4110322.pem -> 4110322.pem in /etc/ssl/certs
	I1202 20:55:58.169534  761851 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 20:55:58.178343  761851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/files/etc/ssl/certs/4110322.pem --> /etc/ssl/certs/4110322.pem (1708 bytes)
	I1202 20:55:58.201537  761851 start.go:296] duration metric: took 183.605841ms for postStartSetup
	I1202 20:55:58.201980  761851 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-386191
	I1202 20:55:58.222381  761851 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/config.json ...
	I1202 20:55:58.222725  761851 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 20:55:58.222779  761851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-386191
	I1202 20:55:58.246974  761851 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33513 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/embed-certs-386191/id_rsa Username:docker}
	I1202 20:55:58.349308  761851 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 20:55:58.354335  761851 start.go:128] duration metric: took 8.098942472s to createHost
	I1202 20:55:58.354367  761851 start.go:83] releasing machines lock for "embed-certs-386191", held for 8.099141281s
	I1202 20:55:58.354452  761851 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-386191
	I1202 20:55:58.375692  761851 ssh_runner.go:195] Run: cat /version.json
	I1202 20:55:58.375743  761851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-386191
	I1202 20:55:58.375778  761851 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 20:55:58.375875  761851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-386191
	I1202 20:55:58.399444  761851 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33513 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/embed-certs-386191/id_rsa Username:docker}
	I1202 20:55:58.401096  761851 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33513 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/embed-certs-386191/id_rsa Username:docker}
	I1202 20:55:58.567709  761851 ssh_runner.go:195] Run: systemctl --version
	I1202 20:55:58.576291  761851 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 20:55:58.616262  761851 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 20:55:58.621961  761851 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 20:55:58.622044  761851 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 20:55:58.651183  761851 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1202 20:55:58.651217  761851 start.go:496] detecting cgroup driver to use...
	I1202 20:55:58.651265  761851 detect.go:190] detected "systemd" cgroup driver on host os
	I1202 20:55:58.651331  761851 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 20:55:58.670441  761851 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 20:55:58.684478  761851 docker.go:218] disabling cri-docker service (if available) ...
	I1202 20:55:58.684542  761851 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 20:55:58.704480  761851 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 20:55:58.725624  761851 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 20:55:58.831744  761851 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 20:55:58.927526  761851 docker.go:234] disabling docker service ...
	I1202 20:55:58.927588  761851 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 20:55:58.947085  761851 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 20:55:58.961716  761851 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 20:55:59.059830  761851 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 20:55:59.155836  761851 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 20:55:59.170575  761851 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 20:55:59.187647  761851 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1202 20:55:59.187711  761851 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:55:59.199691  761851 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1202 20:55:59.199752  761851 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:55:59.210377  761851 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:55:59.221666  761851 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:55:59.233039  761851 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 20:55:59.242836  761851 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:55:59.252564  761851 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:55:59.268580  761851 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:55:59.279302  761851 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 20:55:59.288550  761851 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 20:55:59.297166  761851 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 20:55:59.384478  761851 ssh_runner.go:195] Run: sudo systemctl restart crio
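
	The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf before crio is restarted: pin the pause image, switch to the systemd cgroup driver, run conmon in the pod cgroup, and allow unprivileged binds to low ports. A rough Go equivalent of those rewrites, for illustration only; minikube shells out to sed as the log shows, and the sample input below is an assumption:

	// Apply the same line rewrites the sed commands above perform.
	package main

	import (
		"fmt"
		"regexp"
	)

	func main() {
		conf := `pause_image = "registry.k8s.io/pause:3.10"
	cgroup_manager = "cgroupfs"
	`
		// pin the pause image expected by this Kubernetes version
		conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
		// switch to the systemd cgroup driver and run conmon in the pod cgroup
		conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(conf, "cgroup_manager = \"systemd\"\nconmon_cgroup = \"pod\"")
		// let containers bind low ports without extra privileges
		conf += "default_sysctls = [\n  \"net.ipv4.ip_unprivileged_port_start=0\",\n]\n"
		fmt.Print(conf)
	}
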
	I1202 20:55:59.534012  761851 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 20:55:59.534100  761851 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 20:55:59.538865  761851 start.go:564] Will wait 60s for crictl version
	I1202 20:55:59.538929  761851 ssh_runner.go:195] Run: which crictl
	I1202 20:55:59.542822  761851 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 20:55:59.570175  761851 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1202 20:55:59.570275  761851 ssh_runner.go:195] Run: crio --version
	I1202 20:55:59.600365  761851 ssh_runner.go:195] Run: crio --version
	I1202 20:55:59.632281  761851 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.2 ...
	I1202 20:55:59.633569  761851 cli_runner.go:164] Run: docker network inspect embed-certs-386191 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 20:55:59.653989  761851 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1202 20:55:59.659705  761851 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 20:55:59.673939  761851 kubeadm.go:884] updating cluster {Name:embed-certs-386191 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-386191 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePat
h: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 20:55:59.674148  761851 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 20:55:59.674231  761851 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 20:55:59.721572  761851 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 20:55:59.721623  761851 crio.go:433] Images already preloaded, skipping extraction
	I1202 20:55:59.721807  761851 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 20:55:59.763726  761851 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 20:55:59.763753  761851 cache_images.go:86] Images are preloaded, skipping loading
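For reference: the preload check above shells out to "sudo crictl images --output json" and skips extraction when the cached images are already present on the node. A minimal stand-alone probe in that spirit is sketched below. It is not minikube's implementation; the JSON shape (an "images" array whose entries carry "repoTags") and the sample image tag are assumptions to verify against your crictl/CRI-O version.

// preloadcheck.go: rough sketch of an "are the images already present?" probe,
// modelled on the crictl call logged above. Assumes `crictl images --output json`
// emits {"images":[{"repoTags":[...]}, ...]} (verify against your crictl version).
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		fmt.Println("unexpected JSON:", err)
		return
	}
	present := map[string]bool{}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			present[tag] = true
		}
	}
	want := "registry.k8s.io/kube-apiserver:v1.34.2" // example tag, not an exhaustive preload list
	fmt.Printf("%s present: %v\n", want, present[want])
}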
	I1202 20:55:59.763763  761851 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.2 crio true true} ...
	I1202 20:55:59.763877  761851 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-386191 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:embed-certs-386191 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1202 20:55:59.763974  761851 ssh_runner.go:195] Run: crio config
	I1202 20:55:59.830764  761851 cni.go:84] Creating CNI manager for ""
	I1202 20:55:59.830790  761851 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 20:55:59.830809  761851 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1202 20:55:59.830832  761851 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-386191 NodeName:embed-certs-386191 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 20:55:59.830950  761851 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-386191"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1202 20:55:59.831035  761851 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1202 20:55:59.841880  761851 binaries.go:51] Found k8s binaries, skipping transfer
	I1202 20:55:59.841954  761851 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 20:55:59.852027  761851 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (369 bytes)
	I1202 20:55:59.869099  761851 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1202 20:55:59.889821  761851 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2217 bytes)
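The file copied above (/var/tmp/minikube/kubeadm.yaml.new) is the multi-document YAML printed a few lines earlier: InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration separated by "---". When eyeballing what was actually written to the node, a quick way to list the document kinds is sketched below; the parsing is deliberately naive (stdlib only, no YAML decoder) and the path is simply the one from the scp line.

// kinds.go: list the document kinds in the generated kubeadm YAML.
// Illustrative only; a real tool would use a proper YAML decoder.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	path := "/var/tmp/minikube/kubeadm.yaml.new" // path taken from the scp line above
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Println("read failed:", err)
		return
	}
	for i, doc := range strings.Split(string(data), "\n---\n") {
		kind := "(unknown)"
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(strings.TrimSpace(line), "kind:") {
				kind = strings.TrimSpace(strings.TrimPrefix(strings.TrimSpace(line), "kind:"))
				break
			}
		}
		fmt.Printf("document %d: kind=%s\n", i+1, kind)
	}
}

Against the config shown above this would report the four kinds in order: InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration.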
	I1202 20:55:59.907811  761851 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1202 20:55:59.913347  761851 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 20:55:59.927373  761851 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	W1202 20:55:56.478639  754876 pod_ready.go:104] pod "coredns-7d764666f9-ghxk6" is not "Ready", error: <nil>
	W1202 20:55:58.978346  754876 pod_ready.go:104] pod "coredns-7d764666f9-ghxk6" is not "Ready", error: <nil>
	I1202 20:56:00.050556  761851 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 20:56:00.077300  761851 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191 for IP: 192.168.103.2
	I1202 20:56:00.077325  761851 certs.go:195] generating shared ca certs ...
	I1202 20:56:00.077348  761851 certs.go:227] acquiring lock for ca certs: {Name:mkea11924193dd4dc459e16d70207bde383196df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:56:00.077530  761851 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-407427/.minikube/ca.key
	I1202 20:56:00.077575  761851 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-407427/.minikube/proxy-client-ca.key
	I1202 20:56:00.077588  761851 certs.go:257] generating profile certs ...
	I1202 20:56:00.077664  761851 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/client.key
	I1202 20:56:00.077682  761851 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/client.crt with IP's: []
	I1202 20:56:00.252632  761851 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/client.crt ...
	I1202 20:56:00.252663  761851 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/client.crt: {Name:mk9d10e4646efb676095250174819771b143a8ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:56:00.252877  761851 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/client.key ...
	I1202 20:56:00.252896  761851 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/client.key: {Name:mk09798c33ea1ea9f8eb08ebf47349e244c0760e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:56:00.253023  761851 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/apiserver.key.1b423d29
	I1202 20:56:00.253048  761851 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/apiserver.crt.1b423d29 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1202 20:56:00.432017  761851 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/apiserver.crt.1b423d29 ...
	I1202 20:56:00.432052  761851 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/apiserver.crt.1b423d29: {Name:mk6d91134ec48be46c0e886b478e71e1794c3cdc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:56:00.432278  761851 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/apiserver.key.1b423d29 ...
	I1202 20:56:00.432302  761851 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/apiserver.key.1b423d29: {Name:mk97fa0403fe534a503bf999364704991b597622 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:56:00.432413  761851 certs.go:382] copying /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/apiserver.crt.1b423d29 -> /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/apiserver.crt
	I1202 20:56:00.432512  761851 certs.go:386] copying /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/apiserver.key.1b423d29 -> /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/apiserver.key
	I1202 20:56:00.432593  761851 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/proxy-client.key
	I1202 20:56:00.432619  761851 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/proxy-client.crt with IP's: []
	I1202 20:56:00.527766  761851 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/proxy-client.crt ...
	I1202 20:56:00.527802  761851 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/proxy-client.crt: {Name:mke9848302a1327d00a26fb35bc8d56284a1ca08 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:56:00.528029  761851 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/proxy-client.key ...
	I1202 20:56:00.528053  761851 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/proxy-client.key: {Name:mk5b412430aa6855d80ede6a2641ba2256c9a484 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:56:00.528324  761851 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/411032.pem (1338 bytes)
	W1202 20:56:00.528374  761851 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-407427/.minikube/certs/411032_empty.pem, impossibly tiny 0 bytes
	I1202 20:56:00.528390  761851 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca-key.pem (1679 bytes)
	I1202 20:56:00.528423  761851 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca.pem (1082 bytes)
	I1202 20:56:00.528455  761851 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/cert.pem (1123 bytes)
	I1202 20:56:00.528493  761851 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/key.pem (1675 bytes)
	I1202 20:56:00.528552  761851 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/files/etc/ssl/certs/4110322.pem (1708 bytes)
	I1202 20:56:00.529432  761851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 20:56:00.554691  761851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1202 20:56:00.580499  761851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 20:56:00.606002  761851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1202 20:56:00.630389  761851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1202 20:56:00.655553  761851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1202 20:56:00.679419  761851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 20:56:00.704325  761851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1202 20:56:00.729255  761851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/files/etc/ssl/certs/4110322.pem --> /usr/share/ca-certificates/4110322.pem (1708 bytes)
	I1202 20:56:00.757910  761851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 20:56:00.782959  761851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/certs/411032.pem --> /usr/share/ca-certificates/411032.pem (1338 bytes)
	I1202 20:56:00.808564  761851 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 20:56:00.828291  761851 ssh_runner.go:195] Run: openssl version
	I1202 20:56:00.836796  761851 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4110322.pem && ln -fs /usr/share/ca-certificates/4110322.pem /etc/ssl/certs/4110322.pem"
	I1202 20:56:00.848469  761851 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4110322.pem
	I1202 20:56:00.853715  761851 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 20:13 /usr/share/ca-certificates/4110322.pem
	I1202 20:56:00.853790  761851 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4110322.pem
	I1202 20:56:00.905576  761851 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4110322.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 20:56:00.918463  761851 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 20:56:00.930339  761851 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 20:56:00.935452  761851 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 19:55 /usr/share/ca-certificates/minikubeCA.pem
	I1202 20:56:00.935522  761851 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 20:56:00.990051  761851 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 20:56:01.002960  761851 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/411032.pem && ln -fs /usr/share/ca-certificates/411032.pem /etc/ssl/certs/411032.pem"
	I1202 20:56:01.013994  761851 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/411032.pem
	I1202 20:56:01.019737  761851 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 20:13 /usr/share/ca-certificates/411032.pem
	I1202 20:56:01.019798  761851 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/411032.pem
	I1202 20:56:01.062700  761851 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/411032.pem /etc/ssl/certs/51391683.0"
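The certificate steps above generate and copy several certs whose SANs appear in the log (for instance the apiserver cert is generated for IPs 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.103.2). To confirm those SANs on the node after the fact, a short crypto/x509 sketch such as the following can be pointed at any of the copied files; the chosen path is just the apiserver.crt destination from the scp lines above.

// showsans.go: print the subject and SANs of a PEM certificate, e.g. the
// apiserver cert copied to /var/lib/minikube/certs/ in the log above.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	path := "/var/lib/minikube/certs/apiserver.crt" // destination from the scp line above
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Println("read failed:", err)
		return
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Println("no PEM block found")
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Println("parse failed:", err)
		return
	}
	fmt.Println("subject:", cert.Subject)
	fmt.Println("DNS SANs:", cert.DNSNames)
	fmt.Println("IP SANs:", cert.IPAddresses)
}

(openssl x509 -noout -text -in <file> shows the same information from the shell.)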
	I1202 20:56:01.074487  761851 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 20:56:01.079958  761851 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1202 20:56:01.080033  761851 kubeadm.go:401] StartCluster: {Name:embed-certs-386191 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-386191 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 20:56:01.080164  761851 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 20:56:01.080231  761851 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 20:56:01.119713  761851 cri.go:89] found id: ""
	I1202 20:56:01.122354  761851 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 20:56:01.160024  761851 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1202 20:56:01.174466  761851 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1202 20:56:01.174517  761851 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 20:56:01.186198  761851 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1202 20:56:01.186294  761851 kubeadm.go:158] found existing configuration files:
	
	I1202 20:56:01.186361  761851 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1202 20:56:01.201548  761851 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1202 20:56:01.201623  761851 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1202 20:56:01.214153  761851 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1202 20:56:01.225107  761851 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1202 20:56:01.225225  761851 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 20:56:01.236050  761851 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1202 20:56:01.247714  761851 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1202 20:56:01.247785  761851 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 20:56:01.259129  761851 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1202 20:56:01.270914  761851 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1202 20:56:01.270981  761851 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1202 20:56:01.283320  761851 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1202 20:56:01.344042  761851 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1202 20:56:01.344150  761851 kubeadm.go:319] [preflight] Running pre-flight checks
	I1202 20:56:01.374696  761851 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1202 20:56:01.374786  761851 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1202 20:56:01.374832  761851 kubeadm.go:319] OS: Linux
	I1202 20:56:01.374904  761851 kubeadm.go:319] CGROUPS_CPU: enabled
	I1202 20:56:01.374965  761851 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1202 20:56:01.375027  761851 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1202 20:56:01.375100  761851 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1202 20:56:01.375165  761851 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1202 20:56:01.375227  761851 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1202 20:56:01.375295  761851 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1202 20:56:01.375351  761851 kubeadm.go:319] CGROUPS_IO: enabled
	I1202 20:56:01.461671  761851 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1202 20:56:01.461847  761851 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1202 20:56:01.462101  761851 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1202 20:56:01.473475  761851 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1202 20:56:00.519234  759377 pod_ready.go:104] pod "coredns-66bc5c9577-jrln7" is not "Ready", error: <nil>
	W1202 20:56:03.019288  759377 pod_ready.go:104] pod "coredns-66bc5c9577-jrln7" is not "Ready", error: <nil>
	I1202 20:56:01.478718  761851 out.go:252]   - Generating certificates and keys ...
	I1202 20:56:01.478829  761851 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1202 20:56:01.478911  761851 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1202 20:56:01.668758  761851 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1202 20:56:01.829895  761851 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1202 20:56:02.005376  761851 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1202 20:56:02.862909  761851 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1202 20:56:03.307052  761851 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1202 20:56:03.307703  761851 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-386191 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1202 20:56:03.383959  761851 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1202 20:56:03.384496  761851 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-386191 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1202 20:56:03.508307  761851 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1202 20:56:04.670556  761851 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1202 20:56:04.823930  761851 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1202 20:56:04.824007  761851 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	W1202 20:56:00.979309  754876 pod_ready.go:104] pod "coredns-7d764666f9-ghxk6" is not "Ready", error: <nil>
	W1202 20:56:02.980313  754876 pod_ready.go:104] pod "coredns-7d764666f9-ghxk6" is not "Ready", error: <nil>
	W1202 20:56:05.478729  754876 pod_ready.go:104] pod "coredns-7d764666f9-ghxk6" is not "Ready", error: <nil>
	I1202 20:56:05.205466  761851 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1202 20:56:05.375427  761851 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1202 20:56:05.434193  761851 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1202 20:56:05.863197  761851 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1202 20:56:06.053990  761851 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1202 20:56:06.054504  761851 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1202 20:56:06.058651  761851 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1202 20:56:05.517785  759377 pod_ready.go:104] pod "coredns-66bc5c9577-jrln7" is not "Ready", error: <nil>
	W1202 20:56:07.518439  759377 pod_ready.go:104] pod "coredns-66bc5c9577-jrln7" is not "Ready", error: <nil>
	I1202 20:56:06.060126  761851 out.go:252]   - Booting up control plane ...
	I1202 20:56:06.060244  761851 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1202 20:56:06.060364  761851 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1202 20:56:06.061268  761851 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1202 20:56:06.095037  761851 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1202 20:56:06.095189  761851 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1202 20:56:06.102515  761851 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1202 20:56:06.102696  761851 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1202 20:56:06.102769  761851 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1202 20:56:06.205490  761851 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1202 20:56:06.205715  761851 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1202 20:56:07.205674  761851 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001810301s
	I1202 20:56:07.209848  761851 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1202 20:56:07.210052  761851 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1202 20:56:07.210217  761851 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1202 20:56:07.210338  761851 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1202 20:56:08.756010  761851 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.546069674s
	I1202 20:56:09.869674  761851 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.659323153s
	W1202 20:56:07.979740  754876 pod_ready.go:104] pod "coredns-7d764666f9-ghxk6" is not "Ready", error: <nil>
	W1202 20:56:10.478689  754876 pod_ready.go:104] pod "coredns-7d764666f9-ghxk6" is not "Ready", error: <nil>
	I1202 20:56:11.711917  761851 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.502061899s
	I1202 20:56:11.728157  761851 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1202 20:56:11.740906  761851 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1202 20:56:11.753231  761851 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1202 20:56:11.753530  761851 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-386191 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1202 20:56:11.764705  761851 kubeadm.go:319] [bootstrap-token] Using token: c8uju2.57r80hlp0isn29k2
	I1202 20:56:11.766183  761851 out.go:252]   - Configuring RBAC rules ...
	I1202 20:56:11.766339  761851 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1202 20:56:11.770506  761851 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1202 20:56:11.777525  761851 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1202 20:56:11.780772  761851 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1202 20:56:11.785459  761851 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1202 20:56:11.788963  761851 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1202 20:56:12.119080  761851 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1202 20:56:12.539952  761851 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1202 20:56:13.118875  761851 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1202 20:56:13.119856  761851 kubeadm.go:319] 
	I1202 20:56:13.119972  761851 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1202 20:56:13.119991  761851 kubeadm.go:319] 
	I1202 20:56:13.120096  761851 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1202 20:56:13.120106  761851 kubeadm.go:319] 
	I1202 20:56:13.120132  761851 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1202 20:56:13.120189  761851 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1202 20:56:13.120239  761851 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1202 20:56:13.120250  761851 kubeadm.go:319] 
	I1202 20:56:13.120296  761851 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1202 20:56:13.120303  761851 kubeadm.go:319] 
	I1202 20:56:13.120350  761851 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1202 20:56:13.120356  761851 kubeadm.go:319] 
	I1202 20:56:13.120405  761851 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1202 20:56:13.120480  761851 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1202 20:56:13.120550  761851 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1202 20:56:13.120559  761851 kubeadm.go:319] 
	I1202 20:56:13.120655  761851 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1202 20:56:13.120760  761851 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1202 20:56:13.120770  761851 kubeadm.go:319] 
	I1202 20:56:13.120947  761851 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token c8uju2.57r80hlp0isn29k2 \
	I1202 20:56:13.121116  761851 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:48349ffa2369b87cd15480ece56edb1c5c5d97a828930f9dfbf1d1ceccf80ff4 \
	I1202 20:56:13.121150  761851 kubeadm.go:319] 	--control-plane 
	I1202 20:56:13.121158  761851 kubeadm.go:319] 
	I1202 20:56:13.121277  761851 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1202 20:56:13.121292  761851 kubeadm.go:319] 
	I1202 20:56:13.121403  761851 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token c8uju2.57r80hlp0isn29k2 \
	I1202 20:56:13.121546  761851 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:48349ffa2369b87cd15480ece56edb1c5c5d97a828930f9dfbf1d1ceccf80ff4 
	I1202 20:56:13.124563  761851 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1202 20:56:13.124664  761851 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
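The control-plane checks in the kubeadm output above poll three endpoints until they report healthy: the apiserver at https://192.168.103.2:8443/livez, the controller-manager at https://127.0.0.1:10257/healthz and the scheduler at https://127.0.0.1:10259/livez. A minimal polling loop in that spirit is sketched below; the URLs are copied from the log, while the loop itself (per-request timeout, skipping TLS verification for the self-signed serving certs, one-second retry) is an illustrative assumption rather than kubeadm's actual implementation.

// cpcheck.go: poll the control-plane health endpoints reported by kubeadm
// above until each answers 200 OK or a deadline passes. TLS verification is
// skipped because the components serve self-signed certificates.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	endpoints := []string{
		"https://192.168.103.2:8443/livez", // kube-apiserver
		"https://127.0.0.1:10257/healthz",  // kube-controller-manager
		"https://127.0.0.1:10259/livez",    // kube-scheduler
	}
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(4 * time.Minute) // same budget kubeadm states above
	for _, url := range endpoints {
		for {
			resp, err := client.Get(url)
			if err == nil && resp.StatusCode == http.StatusOK {
				resp.Body.Close()
				fmt.Println(url, "healthy")
				break
			}
			if resp != nil {
				resp.Body.Close()
			}
			if time.Now().After(deadline) {
				fmt.Println(url, "not healthy before deadline")
				break
			}
			time.Sleep(time.Second)
		}
	}
}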
	I1202 20:56:13.124688  761851 cni.go:84] Creating CNI manager for ""
	I1202 20:56:13.124700  761851 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 20:56:13.126500  761851 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1202 20:56:10.017702  759377 pod_ready.go:104] pod "coredns-66bc5c9577-jrln7" is not "Ready", error: <nil>
	W1202 20:56:12.018270  759377 pod_ready.go:104] pod "coredns-66bc5c9577-jrln7" is not "Ready", error: <nil>
	I1202 20:56:13.128206  761851 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1202 20:56:13.133011  761851 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1202 20:56:13.133036  761851 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1202 20:56:13.147210  761851 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1202 20:56:13.367880  761851 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1202 20:56:13.368008  761851 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 20:56:13.368037  761851 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-386191 minikube.k8s.io/updated_at=2025_12_02T20_56_13_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=c0dec3f81fa1d67ed4ee425e0873d0aa009f9b92 minikube.k8s.io/name=embed-certs-386191 minikube.k8s.io/primary=true
	I1202 20:56:13.378170  761851 ops.go:34] apiserver oom_adj: -16
	I1202 20:56:13.456213  761851 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 20:56:13.956791  761851 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 20:56:14.456911  761851 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 20:56:14.957002  761851 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1202 20:56:12.481885  754876 pod_ready.go:104] pod "coredns-7d764666f9-ghxk6" is not "Ready", error: <nil>
	I1202 20:56:14.478647  754876 pod_ready.go:94] pod "coredns-7d764666f9-ghxk6" is "Ready"
	I1202 20:56:14.478679  754876 pod_ready.go:86] duration metric: took 33.50633852s for pod "coredns-7d764666f9-ghxk6" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:14.481510  754876 pod_ready.go:83] waiting for pod "etcd-no-preload-336331" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:14.487252  754876 pod_ready.go:94] pod "etcd-no-preload-336331" is "Ready"
	I1202 20:56:14.487284  754876 pod_ready.go:86] duration metric: took 5.742661ms for pod "etcd-no-preload-336331" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:14.489709  754876 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-336331" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:14.493975  754876 pod_ready.go:94] pod "kube-apiserver-no-preload-336331" is "Ready"
	I1202 20:56:14.494030  754876 pod_ready.go:86] duration metric: took 4.293005ms for pod "kube-apiserver-no-preload-336331" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:14.496555  754876 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-336331" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:14.676017  754876 pod_ready.go:94] pod "kube-controller-manager-no-preload-336331" is "Ready"
	I1202 20:56:14.676054  754876 pod_ready.go:86] duration metric: took 179.468852ms for pod "kube-controller-manager-no-preload-336331" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:14.876507  754876 pod_ready.go:83] waiting for pod "kube-proxy-qc2v9" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:15.276156  754876 pod_ready.go:94] pod "kube-proxy-qc2v9" is "Ready"
	I1202 20:56:15.276184  754876 pod_ready.go:86] duration metric: took 399.652639ms for pod "kube-proxy-qc2v9" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:15.476929  754876 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-336331" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:15.876785  754876 pod_ready.go:94] pod "kube-scheduler-no-preload-336331" is "Ready"
	I1202 20:56:15.876821  754876 pod_ready.go:86] duration metric: took 399.859554ms for pod "kube-scheduler-no-preload-336331" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:15.876837  754876 pod_ready.go:40] duration metric: took 34.909444308s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1202 20:56:15.923408  754876 start.go:625] kubectl: 1.34.2, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1202 20:56:15.925124  754876 out.go:179] * Done! kubectl is now configured to use "no-preload-336331" cluster and "default" namespace by default
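The pod_ready.go lines interleaved above are minikube polling each kube-system pod until its Ready condition turns True, with a per-pod retry loop and an overall budget. minikube does this through client-go; the sketch below only approximates the same check from the outside by shelling out to kubectl and reading the Ready condition via JSONPath. Pod name, namespace and the two-minute budget are placeholders, not values taken from minikube.

// podready.go: approximate the "wait until the pod is Ready" loop seen in the
// log by polling kubectl. Pod name, namespace and the retry budget are placeholders.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	const ns, pod = "kube-system", "coredns-66bc5c9577-q6l9x" // example pod from the log
	jsonpath := `-o=jsonpath={.status.conditions[?(@.type=="Ready")].status}`
	deadline := time.Now().Add(2 * time.Minute)
	for {
		out, err := exec.Command("kubectl", "-n", ns, "get", "pod", pod, jsonpath).Output()
		if err == nil && strings.TrimSpace(string(out)) == "True" {
			fmt.Printf("pod %q is Ready\n", pod)
			return
		}
		if time.Now().After(deadline) {
			fmt.Printf("pod %q not Ready before deadline\n", pod)
			return
		}
		time.Sleep(2 * time.Second)
	}
}

(kubectl wait --for=condition=Ready pod/<name> -n kube-system --timeout=2m achieves the same thing in a single command.)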
	I1202 20:56:15.457186  761851 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 20:56:15.957341  761851 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 20:56:16.456356  761851 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 20:56:16.956786  761851 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 20:56:17.457273  761851 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 20:56:17.529683  761851 kubeadm.go:1114] duration metric: took 4.161789754s to wait for elevateKubeSystemPrivileges
	I1202 20:56:17.529733  761851 kubeadm.go:403] duration metric: took 16.449707403s to StartCluster
	I1202 20:56:17.529758  761851 settings.go:142] acquiring lock: {Name:mk79858205e0c110b31d2645b49097e349ff37c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:56:17.529828  761851 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-407427/kubeconfig
	I1202 20:56:17.531386  761851 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-407427/kubeconfig: {Name:mk71769b01652992a8aaa239aae4f5c38b3500e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:56:17.531613  761851 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1202 20:56:17.531617  761851 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 20:56:17.531699  761851 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1202 20:56:17.531801  761851 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-386191"
	I1202 20:56:17.531817  761851 config.go:182] Loaded profile config "embed-certs-386191": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 20:56:17.531839  761851 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-386191"
	I1202 20:56:17.531817  761851 addons.go:70] Setting default-storageclass=true in profile "embed-certs-386191"
	I1202 20:56:17.531877  761851 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-386191"
	I1202 20:56:17.531882  761851 host.go:66] Checking if "embed-certs-386191" exists ...
	I1202 20:56:17.532342  761851 cli_runner.go:164] Run: docker container inspect embed-certs-386191 --format={{.State.Status}}
	I1202 20:56:17.532507  761851 cli_runner.go:164] Run: docker container inspect embed-certs-386191 --format={{.State.Status}}
	I1202 20:56:17.534531  761851 out.go:179] * Verifying Kubernetes components...
	I1202 20:56:17.535950  761851 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 20:56:17.558800  761851 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 20:56:17.560025  761851 addons.go:239] Setting addon default-storageclass=true in "embed-certs-386191"
	I1202 20:56:17.560084  761851 host.go:66] Checking if "embed-certs-386191" exists ...
	I1202 20:56:17.560580  761851 cli_runner.go:164] Run: docker container inspect embed-certs-386191 --format={{.State.Status}}
	I1202 20:56:17.561225  761851 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 20:56:17.561246  761851 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1202 20:56:17.561324  761851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-386191
	I1202 20:56:17.590711  761851 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33513 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/embed-certs-386191/id_rsa Username:docker}
	I1202 20:56:17.592956  761851 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1202 20:56:17.592992  761851 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1202 20:56:17.593051  761851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-386191
	I1202 20:56:17.617931  761851 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33513 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/embed-certs-386191/id_rsa Username:docker}
	I1202 20:56:17.638614  761851 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1202 20:56:17.681673  761851 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 20:56:17.712144  761851 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 20:56:17.735866  761851 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1202 20:56:17.815035  761851 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1202 20:56:17.816483  761851 node_ready.go:35] waiting up to 6m0s for node "embed-certs-386191" to be "Ready" ...
	I1202 20:56:18.003767  761851 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1202 20:56:14.018515  759377 pod_ready.go:104] pod "coredns-66bc5c9577-jrln7" is not "Ready", error: <nil>
	W1202 20:56:16.020009  759377 pod_ready.go:104] pod "coredns-66bc5c9577-jrln7" is not "Ready", error: <nil>
	W1202 20:56:18.517905  759377 pod_ready.go:104] pod "coredns-66bc5c9577-jrln7" is not "Ready", error: <nil>
	I1202 20:56:18.004793  761851 addons.go:530] duration metric: took 473.08842ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1202 20:56:18.319554  761851 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-386191" context rescaled to 1 replicas
	W1202 20:56:19.820111  761851 node_ready.go:57] node "embed-certs-386191" has "Ready":"False" status (will retry)
	W1202 20:56:21.019501  759377 pod_ready.go:104] pod "coredns-66bc5c9577-jrln7" is not "Ready", error: <nil>
	W1202 20:56:23.518373  759377 pod_ready.go:104] pod "coredns-66bc5c9577-jrln7" is not "Ready", error: <nil>
	W1202 20:56:22.320036  761851 node_ready.go:57] node "embed-certs-386191" has "Ready":"False" status (will retry)
	W1202 20:56:24.320559  761851 node_ready.go:57] node "embed-certs-386191" has "Ready":"False" status (will retry)
	W1202 20:56:26.018767  759377 pod_ready.go:104] pod "coredns-66bc5c9577-jrln7" is not "Ready", error: <nil>
	W1202 20:56:28.019223  759377 pod_ready.go:104] pod "coredns-66bc5c9577-jrln7" is not "Ready", error: <nil>
	W1202 20:56:26.320730  761851 node_ready.go:57] node "embed-certs-386191" has "Ready":"False" status (will retry)
	W1202 20:56:28.820145  761851 node_ready.go:57] node "embed-certs-386191" has "Ready":"False" status (will retry)
	W1202 20:56:30.519140  759377 pod_ready.go:104] pod "coredns-66bc5c9577-jrln7" is not "Ready", error: <nil>
	I1202 20:56:32.019528  759377 pod_ready.go:94] pod "coredns-66bc5c9577-jrln7" is "Ready"
	I1202 20:56:32.019562  759377 pod_ready.go:86] duration metric: took 35.507552593s for pod "coredns-66bc5c9577-jrln7" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:32.022973  759377 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-997805" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:32.027973  759377 pod_ready.go:94] pod "etcd-default-k8s-diff-port-997805" is "Ready"
	I1202 20:56:32.028009  759377 pod_ready.go:86] duration metric: took 5.002878ms for pod "etcd-default-k8s-diff-port-997805" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:32.030436  759377 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-997805" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:32.035486  759377 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-997805" is "Ready"
	I1202 20:56:32.035517  759377 pod_ready.go:86] duration metric: took 5.054721ms for pod "kube-apiserver-default-k8s-diff-port-997805" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:32.038168  759377 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-997805" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:32.216544  759377 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-997805" is "Ready"
	I1202 20:56:32.216573  759377 pod_ready.go:86] duration metric: took 178.377154ms for pod "kube-controller-manager-default-k8s-diff-port-997805" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:32.417009  759377 pod_ready.go:83] waiting for pod "kube-proxy-s2jpn" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:32.816568  759377 pod_ready.go:94] pod "kube-proxy-s2jpn" is "Ready"
	I1202 20:56:32.816591  759377 pod_ready.go:86] duration metric: took 399.551658ms for pod "kube-proxy-s2jpn" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:33.016734  759377 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-997805" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:33.415885  759377 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-997805" is "Ready"
	I1202 20:56:33.415912  759377 pod_ready.go:86] duration metric: took 399.150299ms for pod "kube-scheduler-default-k8s-diff-port-997805" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:33.415928  759377 pod_ready.go:40] duration metric: took 36.908377916s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1202 20:56:33.462852  759377 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1202 20:56:33.464589  759377 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-997805" cluster and "default" namespace by default
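Both start-ups end with a kubectl/cluster version note: minor skew 1 for the v1.35.0-beta.0 cluster above and minor skew 0 here. The skew is simply the absolute difference of the two minor version components, as the toy calculation below shows; the version strings are taken from the log, everything else is illustrative.

// skew.go: reproduce the "minor skew" figure printed at the end of each start.
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minor extracts the minor component from a version string like "1.35.0-beta.0".
func minor(v string) int {
	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
	if len(parts) < 2 {
		return 0
	}
	m, _ := strconv.Atoi(parts[1])
	return m
}

func main() {
	kubectlVer, clusterVer := "1.34.2", "1.35.0-beta.0" // values from the log above
	skew := minor(clusterVer) - minor(kubectlVer)
	if skew < 0 {
		skew = -skew
	}
	fmt.Printf("kubectl: %s, cluster: %s (minor skew: %d)\n", kubectlVer, clusterVer, skew)
}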
	I1202 20:56:30.319943  761851 node_ready.go:49] node "embed-certs-386191" is "Ready"
	I1202 20:56:30.319978  761851 node_ready.go:38] duration metric: took 12.503459453s for node "embed-certs-386191" to be "Ready" ...
	I1202 20:56:30.319996  761851 api_server.go:52] waiting for apiserver process to appear ...
	I1202 20:56:30.320050  761851 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:56:30.333122  761851 api_server.go:72] duration metric: took 12.801460339s to wait for apiserver process to appear ...
	I1202 20:56:30.333155  761851 api_server.go:88] waiting for apiserver healthz status ...
	I1202 20:56:30.333181  761851 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1202 20:56:30.338949  761851 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1202 20:56:30.340352  761851 api_server.go:141] control plane version: v1.34.2
	I1202 20:56:30.340387  761851 api_server.go:131] duration metric: took 7.223849ms to wait for apiserver health ...
	I1202 20:56:30.340400  761851 system_pods.go:43] waiting for kube-system pods to appear ...
	I1202 20:56:30.345084  761851 system_pods.go:59] 8 kube-system pods found
	I1202 20:56:30.345142  761851 system_pods.go:61] "coredns-66bc5c9577-q6l9x" [e7159eb1-3cde-437a-99e3-760c9c397977] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 20:56:30.345152  761851 system_pods.go:61] "etcd-embed-certs-386191" [5ca26226-7c69-4d6b-a513-2187c089d96c] Running
	I1202 20:56:30.345160  761851 system_pods.go:61] "kindnet-x9jsh" [410369de-877d-46e5-8f7c-cd8076d1d2f5] Running
	I1202 20:56:30.345166  761851 system_pods.go:61] "kube-apiserver-embed-certs-386191" [dd2a96bf-6afe-43b8-b01d-a88496c7c9b9] Running
	I1202 20:56:30.345173  761851 system_pods.go:61] "kube-controller-manager-embed-certs-386191" [ba51ea44-285e-494e-a173-75f314cb6a5e] Running
	I1202 20:56:30.345178  761851 system_pods.go:61] "kube-proxy-854r8" [6c9652b0-217c-466f-9345-7364f0e39936] Running
	I1202 20:56:30.345185  761851 system_pods.go:61] "kube-scheduler-embed-certs-386191" [abe313cb-061b-4d65-b0e0-f369aacdc1c4] Running
	I1202 20:56:30.345195  761851 system_pods.go:61] "storage-provisioner" [d37e55bb-bb1f-4659-a9c5-14d47011bd23] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 20:56:30.345205  761851 system_pods.go:74] duration metric: took 4.796405ms to wait for pod list to return data ...
	I1202 20:56:30.345227  761851 default_sa.go:34] waiting for default service account to be created ...
	I1202 20:56:30.348608  761851 default_sa.go:45] found service account: "default"
	I1202 20:56:30.348639  761851 default_sa.go:55] duration metric: took 3.40167ms for default service account to be created ...
	I1202 20:56:30.348652  761851 system_pods.go:116] waiting for k8s-apps to be running ...
	I1202 20:56:30.352973  761851 system_pods.go:86] 8 kube-system pods found
	I1202 20:56:30.353004  761851 system_pods.go:89] "coredns-66bc5c9577-q6l9x" [e7159eb1-3cde-437a-99e3-760c9c397977] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 20:56:30.353011  761851 system_pods.go:89] "etcd-embed-certs-386191" [5ca26226-7c69-4d6b-a513-2187c089d96c] Running
	I1202 20:56:30.353017  761851 system_pods.go:89] "kindnet-x9jsh" [410369de-877d-46e5-8f7c-cd8076d1d2f5] Running
	I1202 20:56:30.353021  761851 system_pods.go:89] "kube-apiserver-embed-certs-386191" [dd2a96bf-6afe-43b8-b01d-a88496c7c9b9] Running
	I1202 20:56:30.353025  761851 system_pods.go:89] "kube-controller-manager-embed-certs-386191" [ba51ea44-285e-494e-a173-75f314cb6a5e] Running
	I1202 20:56:30.353028  761851 system_pods.go:89] "kube-proxy-854r8" [6c9652b0-217c-466f-9345-7364f0e39936] Running
	I1202 20:56:30.353031  761851 system_pods.go:89] "kube-scheduler-embed-certs-386191" [abe313cb-061b-4d65-b0e0-f369aacdc1c4] Running
	I1202 20:56:30.353036  761851 system_pods.go:89] "storage-provisioner" [d37e55bb-bb1f-4659-a9c5-14d47011bd23] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 20:56:30.353064  761851 retry.go:31] will retry after 268.066085ms: missing components: kube-dns
	I1202 20:56:30.626568  761851 system_pods.go:86] 8 kube-system pods found
	I1202 20:56:30.626621  761851 system_pods.go:89] "coredns-66bc5c9577-q6l9x" [e7159eb1-3cde-437a-99e3-760c9c397977] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 20:56:30.626630  761851 system_pods.go:89] "etcd-embed-certs-386191" [5ca26226-7c69-4d6b-a513-2187c089d96c] Running
	I1202 20:56:30.626639  761851 system_pods.go:89] "kindnet-x9jsh" [410369de-877d-46e5-8f7c-cd8076d1d2f5] Running
	I1202 20:56:30.626645  761851 system_pods.go:89] "kube-apiserver-embed-certs-386191" [dd2a96bf-6afe-43b8-b01d-a88496c7c9b9] Running
	I1202 20:56:30.626656  761851 system_pods.go:89] "kube-controller-manager-embed-certs-386191" [ba51ea44-285e-494e-a173-75f314cb6a5e] Running
	I1202 20:56:30.626662  761851 system_pods.go:89] "kube-proxy-854r8" [6c9652b0-217c-466f-9345-7364f0e39936] Running
	I1202 20:56:30.626675  761851 system_pods.go:89] "kube-scheduler-embed-certs-386191" [abe313cb-061b-4d65-b0e0-f369aacdc1c4] Running
	I1202 20:56:30.626687  761851 system_pods.go:89] "storage-provisioner" [d37e55bb-bb1f-4659-a9c5-14d47011bd23] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 20:56:30.626708  761851 retry.go:31] will retry after 295.685816ms: missing components: kube-dns
	I1202 20:56:30.926543  761851 system_pods.go:86] 8 kube-system pods found
	I1202 20:56:30.926598  761851 system_pods.go:89] "coredns-66bc5c9577-q6l9x" [e7159eb1-3cde-437a-99e3-760c9c397977] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 20:56:30.926608  761851 system_pods.go:89] "etcd-embed-certs-386191" [5ca26226-7c69-4d6b-a513-2187c089d96c] Running
	I1202 20:56:30.926615  761851 system_pods.go:89] "kindnet-x9jsh" [410369de-877d-46e5-8f7c-cd8076d1d2f5] Running
	I1202 20:56:30.926621  761851 system_pods.go:89] "kube-apiserver-embed-certs-386191" [dd2a96bf-6afe-43b8-b01d-a88496c7c9b9] Running
	I1202 20:56:30.926628  761851 system_pods.go:89] "kube-controller-manager-embed-certs-386191" [ba51ea44-285e-494e-a173-75f314cb6a5e] Running
	I1202 20:56:30.926634  761851 system_pods.go:89] "kube-proxy-854r8" [6c9652b0-217c-466f-9345-7364f0e39936] Running
	I1202 20:56:30.926639  761851 system_pods.go:89] "kube-scheduler-embed-certs-386191" [abe313cb-061b-4d65-b0e0-f369aacdc1c4] Running
	I1202 20:56:30.926646  761851 system_pods.go:89] "storage-provisioner" [d37e55bb-bb1f-4659-a9c5-14d47011bd23] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 20:56:30.926671  761851 retry.go:31] will retry after 481.864787ms: missing components: kube-dns
	I1202 20:56:31.413061  761851 system_pods.go:86] 8 kube-system pods found
	I1202 20:56:31.413118  761851 system_pods.go:89] "coredns-66bc5c9577-q6l9x" [e7159eb1-3cde-437a-99e3-760c9c397977] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 20:56:31.413126  761851 system_pods.go:89] "etcd-embed-certs-386191" [5ca26226-7c69-4d6b-a513-2187c089d96c] Running
	I1202 20:56:31.413131  761851 system_pods.go:89] "kindnet-x9jsh" [410369de-877d-46e5-8f7c-cd8076d1d2f5] Running
	I1202 20:56:31.413134  761851 system_pods.go:89] "kube-apiserver-embed-certs-386191" [dd2a96bf-6afe-43b8-b01d-a88496c7c9b9] Running
	I1202 20:56:31.413141  761851 system_pods.go:89] "kube-controller-manager-embed-certs-386191" [ba51ea44-285e-494e-a173-75f314cb6a5e] Running
	I1202 20:56:31.413146  761851 system_pods.go:89] "kube-proxy-854r8" [6c9652b0-217c-466f-9345-7364f0e39936] Running
	I1202 20:56:31.413151  761851 system_pods.go:89] "kube-scheduler-embed-certs-386191" [abe313cb-061b-4d65-b0e0-f369aacdc1c4] Running
	I1202 20:56:31.413158  761851 system_pods.go:89] "storage-provisioner" [d37e55bb-bb1f-4659-a9c5-14d47011bd23] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 20:56:31.413178  761851 retry.go:31] will retry after 524.282357ms: missing components: kube-dns
	I1202 20:56:31.942153  761851 system_pods.go:86] 8 kube-system pods found
	I1202 20:56:31.942180  761851 system_pods.go:89] "coredns-66bc5c9577-q6l9x" [e7159eb1-3cde-437a-99e3-760c9c397977] Running
	I1202 20:56:31.942185  761851 system_pods.go:89] "etcd-embed-certs-386191" [5ca26226-7c69-4d6b-a513-2187c089d96c] Running
	I1202 20:56:31.942189  761851 system_pods.go:89] "kindnet-x9jsh" [410369de-877d-46e5-8f7c-cd8076d1d2f5] Running
	I1202 20:56:31.942192  761851 system_pods.go:89] "kube-apiserver-embed-certs-386191" [dd2a96bf-6afe-43b8-b01d-a88496c7c9b9] Running
	I1202 20:56:31.942196  761851 system_pods.go:89] "kube-controller-manager-embed-certs-386191" [ba51ea44-285e-494e-a173-75f314cb6a5e] Running
	I1202 20:56:31.942199  761851 system_pods.go:89] "kube-proxy-854r8" [6c9652b0-217c-466f-9345-7364f0e39936] Running
	I1202 20:56:31.942202  761851 system_pods.go:89] "kube-scheduler-embed-certs-386191" [abe313cb-061b-4d65-b0e0-f369aacdc1c4] Running
	I1202 20:56:31.942205  761851 system_pods.go:89] "storage-provisioner" [d37e55bb-bb1f-4659-a9c5-14d47011bd23] Running
	I1202 20:56:31.942212  761851 system_pods.go:126] duration metric: took 1.593529924s to wait for k8s-apps to be running ...
	I1202 20:56:31.942219  761851 system_svc.go:44] waiting for kubelet service to be running ....
	I1202 20:56:31.942261  761851 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 20:56:31.955055  761851 system_svc.go:56] duration metric: took 12.827769ms WaitForService to wait for kubelet
	I1202 20:56:31.955097  761851 kubeadm.go:587] duration metric: took 14.423443169s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 20:56:31.955121  761851 node_conditions.go:102] verifying NodePressure condition ...
	I1202 20:56:31.958210  761851 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1202 20:56:31.958249  761851 node_conditions.go:123] node cpu capacity is 8
	I1202 20:56:31.958265  761851 node_conditions.go:105] duration metric: took 3.138976ms to run NodePressure ...
	I1202 20:56:31.958278  761851 start.go:242] waiting for startup goroutines ...
	I1202 20:56:31.958285  761851 start.go:247] waiting for cluster config update ...
	I1202 20:56:31.958296  761851 start.go:256] writing updated cluster config ...
	I1202 20:56:31.958597  761851 ssh_runner.go:195] Run: rm -f paused
	I1202 20:56:31.962581  761851 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1202 20:56:31.966130  761851 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-q6l9x" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:31.971173  761851 pod_ready.go:94] pod "coredns-66bc5c9577-q6l9x" is "Ready"
	I1202 20:56:31.971201  761851 pod_ready.go:86] duration metric: took 5.04828ms for pod "coredns-66bc5c9577-q6l9x" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:31.973411  761851 pod_ready.go:83] waiting for pod "etcd-embed-certs-386191" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:31.978228  761851 pod_ready.go:94] pod "etcd-embed-certs-386191" is "Ready"
	I1202 20:56:31.978263  761851 pod_ready.go:86] duration metric: took 4.826356ms for pod "etcd-embed-certs-386191" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:31.980684  761851 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-386191" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:31.984771  761851 pod_ready.go:94] pod "kube-apiserver-embed-certs-386191" is "Ready"
	I1202 20:56:31.984803  761851 pod_ready.go:86] duration metric: took 4.09504ms for pod "kube-apiserver-embed-certs-386191" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:31.986878  761851 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-386191" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:32.367606  761851 pod_ready.go:94] pod "kube-controller-manager-embed-certs-386191" is "Ready"
	I1202 20:56:32.367637  761851 pod_ready.go:86] duration metric: took 380.737416ms for pod "kube-controller-manager-embed-certs-386191" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:32.567519  761851 pod_ready.go:83] waiting for pod "kube-proxy-854r8" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:32.967144  761851 pod_ready.go:94] pod "kube-proxy-854r8" is "Ready"
	I1202 20:56:32.967177  761851 pod_ready.go:86] duration metric: took 399.625971ms for pod "kube-proxy-854r8" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:33.168115  761851 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-386191" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:33.566983  761851 pod_ready.go:94] pod "kube-scheduler-embed-certs-386191" is "Ready"
	I1202 20:56:33.567015  761851 pod_ready.go:86] duration metric: took 398.86856ms for pod "kube-scheduler-embed-certs-386191" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:33.567030  761851 pod_ready.go:40] duration metric: took 1.604412945s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1202 20:56:33.625323  761851 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1202 20:56:33.627128  761851 out.go:179] * Done! kubectl is now configured to use "embed-certs-386191" cluster and "default" namespace by default
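The tail of this start log shows the two readiness gates minikube walks through for embed-certs-386191: polling https://192.168.103.2:8443/healthz until it returns 200, then re-listing kube-system pods every few hundred milliseconds until kube-dns leaves Pending. A minimal Go sketch of that healthz-polling pattern, using only the standard library, is shown below; the function name waitForHealthz and the skipped TLS verification are illustrative assumptions for a self-contained example, not minikube's actual implementation (minikube verifies against the cluster CA and drives the pod checks through the Kubernetes API).

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // waitForHealthz polls the apiserver /healthz endpoint until it returns
    // HTTP 200 or the deadline expires. TLS verification is skipped only to
    // keep the sketch self-contained.
    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout: 2 * time.Second,
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil // healthz returned 200: ok
    			}
    		}
    		time.Sleep(500 * time.Millisecond) // retry, as the log's wait loop does
    	}
    	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
    }

    func main() {
    	if err := waitForHealthz("https://192.168.103.2:8443/healthz", 4*time.Minute); err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Println("healthz returned 200: ok")
    }

The subsequent system_pods retries in the log (268ms, 295ms, 481ms, 524ms) appear to use increasing, jittered intervals rather than the fixed sleep shown here.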
	
	
	==> CRI-O <==
	Dec 02 20:56:30 embed-certs-386191 crio[773]: time="2025-12-02T20:56:30.489753762Z" level=info msg="Starting container: 54ada9764677d51fa703458d41549b0719925ade7cf374bdff1bea565edfeddd" id=d1cfaf7d-6012-48af-ae91-4b2a7b4a8f55 name=/runtime.v1.RuntimeService/StartContainer
	Dec 02 20:56:30 embed-certs-386191 crio[773]: time="2025-12-02T20:56:30.492029966Z" level=info msg="Started container" PID=1843 containerID=54ada9764677d51fa703458d41549b0719925ade7cf374bdff1bea565edfeddd description=kube-system/coredns-66bc5c9577-q6l9x/coredns id=d1cfaf7d-6012-48af-ae91-4b2a7b4a8f55 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6b97c6aff937254fcac23f6f92c1ffebd32e86765a625cb469068948628b2b7f
	Dec 02 20:56:34 embed-certs-386191 crio[773]: time="2025-12-02T20:56:34.119534818Z" level=info msg="Running pod sandbox: default/busybox/POD" id=c141724a-42e6-4996-929e-e56ac6cbcdf9 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 02 20:56:34 embed-certs-386191 crio[773]: time="2025-12-02T20:56:34.119625727Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 20:56:34 embed-certs-386191 crio[773]: time="2025-12-02T20:56:34.124538994Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:8474d1042cec4bb33d03a70b536bc292602e7f09f1bc117fd965baf52252c788 UID:ed12a6fb-53bf-431f-b98e-7d12c1f8a178 NetNS:/var/run/netns/5be939e2-2104-465d-aa63-1e1e5693883f Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000132bc8}] Aliases:map[]}"
	Dec 02 20:56:34 embed-certs-386191 crio[773]: time="2025-12-02T20:56:34.124571876Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 02 20:56:34 embed-certs-386191 crio[773]: time="2025-12-02T20:56:34.135185686Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:8474d1042cec4bb33d03a70b536bc292602e7f09f1bc117fd965baf52252c788 UID:ed12a6fb-53bf-431f-b98e-7d12c1f8a178 NetNS:/var/run/netns/5be939e2-2104-465d-aa63-1e1e5693883f Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000132bc8}] Aliases:map[]}"
	Dec 02 20:56:34 embed-certs-386191 crio[773]: time="2025-12-02T20:56:34.13537927Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 02 20:56:34 embed-certs-386191 crio[773]: time="2025-12-02T20:56:34.136240371Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 02 20:56:34 embed-certs-386191 crio[773]: time="2025-12-02T20:56:34.137035991Z" level=info msg="Ran pod sandbox 8474d1042cec4bb33d03a70b536bc292602e7f09f1bc117fd965baf52252c788 with infra container: default/busybox/POD" id=c141724a-42e6-4996-929e-e56ac6cbcdf9 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 02 20:56:34 embed-certs-386191 crio[773]: time="2025-12-02T20:56:34.138467318Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=0f66f839-cab2-4f1f-b08b-096396dad812 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 20:56:34 embed-certs-386191 crio[773]: time="2025-12-02T20:56:34.138598117Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=0f66f839-cab2-4f1f-b08b-096396dad812 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 20:56:34 embed-certs-386191 crio[773]: time="2025-12-02T20:56:34.13865038Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=0f66f839-cab2-4f1f-b08b-096396dad812 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 20:56:34 embed-certs-386191 crio[773]: time="2025-12-02T20:56:34.139560868Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=3b8e4603-cf29-4c7c-80ba-d48e4f5b216e name=/runtime.v1.ImageService/PullImage
	Dec 02 20:56:34 embed-certs-386191 crio[773]: time="2025-12-02T20:56:34.141443604Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 02 20:56:36 embed-certs-386191 crio[773]: time="2025-12-02T20:56:36.075610967Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=3b8e4603-cf29-4c7c-80ba-d48e4f5b216e name=/runtime.v1.ImageService/PullImage
	Dec 02 20:56:36 embed-certs-386191 crio[773]: time="2025-12-02T20:56:36.076427721Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=5cd99946-5826-4b60-88a3-0a5bc3ea35b8 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 20:56:36 embed-certs-386191 crio[773]: time="2025-12-02T20:56:36.077925346Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=b6cb27aa-dc77-4b4c-828d-58acd5cd9681 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 20:56:36 embed-certs-386191 crio[773]: time="2025-12-02T20:56:36.081575015Z" level=info msg="Creating container: default/busybox/busybox" id=190853aa-f083-440f-888a-ca69e018dc43 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 20:56:36 embed-certs-386191 crio[773]: time="2025-12-02T20:56:36.081720948Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 20:56:36 embed-certs-386191 crio[773]: time="2025-12-02T20:56:36.086028014Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 20:56:36 embed-certs-386191 crio[773]: time="2025-12-02T20:56:36.08648828Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 20:56:36 embed-certs-386191 crio[773]: time="2025-12-02T20:56:36.115760427Z" level=info msg="Created container 53af509992d58a8bc87a1a58a93266e1c440002de8395af25f4a5402c9ec9558: default/busybox/busybox" id=190853aa-f083-440f-888a-ca69e018dc43 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 20:56:36 embed-certs-386191 crio[773]: time="2025-12-02T20:56:36.116672399Z" level=info msg="Starting container: 53af509992d58a8bc87a1a58a93266e1c440002de8395af25f4a5402c9ec9558" id=90cf78c5-188a-4eeb-b4b6-3118aec66f55 name=/runtime.v1.RuntimeService/StartContainer
	Dec 02 20:56:36 embed-certs-386191 crio[773]: time="2025-12-02T20:56:36.118740354Z" level=info msg="Started container" PID=1919 containerID=53af509992d58a8bc87a1a58a93266e1c440002de8395af25f4a5402c9ec9558 description=default/busybox/busybox id=90cf78c5-188a-4eeb-b4b6-3118aec66f55 name=/runtime.v1.RuntimeService/StartContainer sandboxID=8474d1042cec4bb33d03a70b536bc292602e7f09f1bc117fd965baf52252c788
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	53af509992d58       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   8474d1042cec4       busybox                                      default
	54ada9764677d       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      13 seconds ago      Running             coredns                   0                   6b97c6aff9372       coredns-66bc5c9577-q6l9x                     kube-system
	3307811b2f63a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 seconds ago      Running             storage-provisioner       0                   4da2e6f293839       storage-provisioner                          kube-system
	06d83f8377d2a       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                      25 seconds ago      Running             kube-proxy                0                   477b04ecc71b5       kube-proxy-854r8                             kube-system
	83a4272218108       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      25 seconds ago      Running             kindnet-cni               0                   4ab3eb1de7bb9       kindnet-x9jsh                                kube-system
	05953a73c6c82       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                      36 seconds ago      Running             etcd                      0                   6464ac07969f1       etcd-embed-certs-386191                      kube-system
	12e53b5f5f8b2       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                      36 seconds ago      Running             kube-apiserver            0                   f5052638a511f       kube-apiserver-embed-certs-386191            kube-system
	6fe062a0482bc       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                      36 seconds ago      Running             kube-controller-manager   0                   828b007149a79       kube-controller-manager-embed-certs-386191   kube-system
	e1b6c0205e8ca       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                      36 seconds ago      Running             kube-scheduler            0                   b6e990ce527c7       kube-scheduler-embed-certs-386191            kube-system
	
	
	==> coredns [54ada9764677d51fa703458d41549b0719925ade7cf374bdff1bea565edfeddd] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:46238 - 58018 "HINFO IN 6105477955425697157.5570652387892170558. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.019104207s
	
	
	==> describe nodes <==
	Name:               embed-certs-386191
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-386191
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c0dec3f81fa1d67ed4ee425e0873d0aa009f9b92
	                    minikube.k8s.io/name=embed-certs-386191
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_02T20_56_13_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 02 Dec 2025 20:56:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-386191
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 02 Dec 2025 20:56:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 02 Dec 2025 20:56:43 +0000   Tue, 02 Dec 2025 20:56:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 02 Dec 2025 20:56:43 +0000   Tue, 02 Dec 2025 20:56:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 02 Dec 2025 20:56:43 +0000   Tue, 02 Dec 2025 20:56:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 02 Dec 2025 20:56:43 +0000   Tue, 02 Dec 2025 20:56:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    embed-certs-386191
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 c31a325af81b969158c21fa769271857
	  System UUID:                f83f142d-7c61-4329-95b4-56ae3cea973b
	  Boot ID:                    9dd5456f-e394-4b4b-9458-48d26faf507e
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-66bc5c9577-q6l9x                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     26s
	  kube-system                 etcd-embed-certs-386191                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         33s
	  kube-system                 kindnet-x9jsh                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-embed-certs-386191             250m (3%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-controller-manager-embed-certs-386191    200m (2%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-proxy-854r8                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-embed-certs-386191             100m (1%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 25s   kube-proxy       
	  Normal  Starting                 32s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  32s   kubelet          Node embed-certs-386191 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    32s   kubelet          Node embed-certs-386191 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     32s   kubelet          Node embed-certs-386191 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           28s   node-controller  Node embed-certs-386191 event: Registered Node embed-certs-386191 in Controller
	  Normal  NodeReady                14s   kubelet          Node embed-certs-386191 status is now: NodeReady
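A quick consistency check on the Allocated resources block above: the totals are just the column sums from the Non-terminated Pods table, and they confirm the node is far from its capacity of 8 CPUs / 32863360Ki memory:

  CPU requests:    100m (coredns) + 100m (etcd) + 100m (kindnet) + 250m (kube-apiserver) + 200m (kube-controller-manager) + 100m (kube-scheduler) = 850m  (~10% of 8 CPUs)
  CPU limits:      100m (kindnet only) = 100m
  memory requests: 70Mi (coredns) + 100Mi (etcd) + 50Mi (kindnet) = 220Mi
  memory limits:   170Mi (coredns) + 50Mi (kindnet) = 220Mi

This is consistent with the MemoryPressure, DiskPressure, and PIDPressure conditions above all reporting False.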
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: a6 a8 6c 2c e4 fb 66 95 0d 9f b9 e6 08 00
	[ +32.254238] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a6 a8 6c 2c e4 fb 66 95 0d 9f b9 e6 08 00
	[Dec 2 20:52] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 62 03 bd 14 45 8a 08 06
	[  +0.000590] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 96 27 ad 0d 40 04 08 06
	[Dec 2 20:53] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 56 d8 4f 6f 7c f7 08 06
	[  +0.000700] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 82 e4 ba c0 78 5f 08 06
	[ +10.119645] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000022] ll header: 00000000: ff ff ff ff ff ff ea 15 d8 88 c9 57 08 06
	[  +2.447166] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 62 df 09 53 d6 6e 08 06
	[  +0.000374] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 62 8d 06 71 0a 5e 08 06
	[Dec 2 20:54] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 12 47 13 50 f6 bc 08 06
	[  +0.001523] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ea 15 d8 88 c9 57 08 06
	[ +22.123549] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 36 0d 45 06 42 2a 08 06
	[  +0.000389] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 56 d8 4f 6f 7c f7 08 06
	
	
	==> etcd [05953a73c6c8273ac1f61fe0602dcc459f743953ee01eb3a1632ac6d44df190a] <==
	{"level":"warn","ts":"2025-12-02T20:56:09.196382Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:56:09.204237Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:56:09.211267Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33922","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:56:09.221903Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33942","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:56:09.236280Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33968","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:56:09.243115Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33978","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:56:09.256616Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34012","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:56:09.263166Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34030","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:56:09.269906Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:56:09.277786Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34068","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:56:09.287225Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34086","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:56:09.294105Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34092","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:56:09.301217Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34112","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:56:09.308254Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:56:09.315055Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34132","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:56:09.328963Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34160","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:56:09.335984Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:56:09.342612Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:56:09.349637Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:56:09.356798Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34216","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:56:09.364261Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34248","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:56:09.386187Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34268","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:56:09.394574Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34288","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:56:09.402270Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34308","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:56:09.452263Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34318","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 20:56:44 up  2:39,  0 user,  load average: 3.57, 3.96, 2.70
	Linux embed-certs-386191 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [83a4272218108ccca95380f30286c9153154163bb40532030f4a388039a359ee] <==
	I1202 20:56:19.285968       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1202 20:56:19.301831       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1202 20:56:19.302043       1 main.go:148] setting mtu 1500 for CNI 
	I1202 20:56:19.302113       1 main.go:178] kindnetd IP family: "ipv4"
	I1202 20:56:19.302155       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-02T20:56:19Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1202 20:56:19.502456       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1202 20:56:19.503013       1 controller.go:381] "Waiting for informer caches to sync"
	I1202 20:56:19.503030       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1202 20:56:19.503226       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1202 20:56:19.903364       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1202 20:56:19.903403       1 metrics.go:72] Registering metrics
	I1202 20:56:19.903488       1 controller.go:711] "Syncing nftables rules"
	I1202 20:56:29.503108       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1202 20:56:29.503213       1 main.go:301] handling current node
	I1202 20:56:39.504704       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1202 20:56:39.504762       1 main.go:301] handling current node
	
	
	==> kube-apiserver [12e53b5f5f8b2b6a550f53ea7cab6cc004af77b82d0204477f6a925d9072711f] <==
	I1202 20:56:09.918810       1 controller.go:667] quota admission added evaluator for: namespaces
	I1202 20:56:09.920848       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1202 20:56:09.921941       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1202 20:56:09.922059       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1202 20:56:09.929818       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1202 20:56:09.929978       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1202 20:56:10.110552       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1202 20:56:10.823398       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1202 20:56:10.827945       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1202 20:56:10.827967       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1202 20:56:11.368603       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1202 20:56:11.411889       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1202 20:56:11.527961       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1202 20:56:11.534527       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1202 20:56:11.535743       1 controller.go:667] quota admission added evaluator for: endpoints
	I1202 20:56:11.540483       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1202 20:56:11.853522       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1202 20:56:12.526677       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1202 20:56:12.538833       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1202 20:56:12.548771       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1202 20:56:16.856380       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1202 20:56:17.660909       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1202 20:56:17.666219       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1202 20:56:17.956030       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	E1202 20:56:42.910219       1 conn.go:339] Error on socket receive: read tcp 192.168.103.2:8443->192.168.103.1:58468: use of closed network connection
	
	
	==> kube-controller-manager [6fe062a0482bcf5b5eef76d507329773027c77cf7f62da32496fafb72a7d22c1] <==
	I1202 20:56:16.823897       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1202 20:56:16.829607       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1202 20:56:16.853254       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1202 20:56:16.853277       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1202 20:56:16.853269       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1202 20:56:16.853380       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1202 20:56:16.853391       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1202 20:56:16.853400       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1202 20:56:16.853457       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1202 20:56:16.853496       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1202 20:56:16.853541       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1202 20:56:16.853583       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-386191"
	I1202 20:56:16.853643       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1202 20:56:16.853693       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1202 20:56:16.853897       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1202 20:56:16.853980       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1202 20:56:16.853997       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1202 20:56:16.854264       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1202 20:56:16.854563       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1202 20:56:16.855614       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1202 20:56:16.856327       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1202 20:56:16.859054       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1202 20:56:16.859274       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1202 20:56:16.876184       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1202 20:56:31.854860       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [06d83f8377d2a0cc62547d642d9df442b507d0dedcfc332ec3f3fd026d7acba7] <==
	I1202 20:56:19.080752       1 server_linux.go:53] "Using iptables proxy"
	I1202 20:56:19.147113       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1202 20:56:19.247483       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1202 20:56:19.247518       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1202 20:56:19.247624       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1202 20:56:19.268062       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1202 20:56:19.268166       1 server_linux.go:132] "Using iptables Proxier"
	I1202 20:56:19.273555       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1202 20:56:19.273900       1 server.go:527] "Version info" version="v1.34.2"
	I1202 20:56:19.273940       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 20:56:19.275430       1 config.go:309] "Starting node config controller"
	I1202 20:56:19.275502       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1202 20:56:19.275519       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1202 20:56:19.275522       1 config.go:403] "Starting serviceCIDR config controller"
	I1202 20:56:19.275546       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1202 20:56:19.275586       1 config.go:200] "Starting service config controller"
	I1202 20:56:19.275623       1 config.go:106] "Starting endpoint slice config controller"
	I1202 20:56:19.275668       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1202 20:56:19.275623       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1202 20:56:19.375786       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1202 20:56:19.375802       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1202 20:56:19.375822       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [e1b6c0205e8ca874d79bbf9bc11dd1d3fdce8f10ab91c4ab3e738c256d2095b4] <==
	E1202 20:56:09.866717       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1202 20:56:09.866755       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1202 20:56:09.866797       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1202 20:56:09.867749       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1202 20:56:09.868000       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1202 20:56:09.868005       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1202 20:56:09.867910       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1202 20:56:09.868438       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1202 20:56:09.868481       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1202 20:56:09.868161       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1202 20:56:09.868627       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1202 20:56:09.868643       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1202 20:56:09.868749       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1202 20:56:09.869012       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1202 20:56:09.869207       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1202 20:56:10.672375       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1202 20:56:10.717773       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1202 20:56:10.803859       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1202 20:56:10.837154       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1202 20:56:10.976089       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1202 20:56:11.026430       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1202 20:56:11.034759       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1202 20:56:11.095263       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1202 20:56:11.130561       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	I1202 20:56:12.763785       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 02 20:56:16 embed-certs-386191 kubelet[1321]: I1202 20:56:16.976685    1321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/410369de-877d-46e5-8f7c-cd8076d1d2f5-lib-modules\") pod \"kindnet-x9jsh\" (UID: \"410369de-877d-46e5-8f7c-cd8076d1d2f5\") " pod="kube-system/kindnet-x9jsh"
	Dec 02 20:56:16 embed-certs-386191 kubelet[1321]: I1202 20:56:16.976707    1321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6c9652b0-217c-466f-9345-7364f0e39936-lib-modules\") pod \"kube-proxy-854r8\" (UID: \"6c9652b0-217c-466f-9345-7364f0e39936\") " pod="kube-system/kube-proxy-854r8"
	Dec 02 20:56:17 embed-certs-386191 kubelet[1321]: E1202 20:56:17.084238    1321 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Dec 02 20:56:17 embed-certs-386191 kubelet[1321]: E1202 20:56:17.084281    1321 projected.go:196] Error preparing data for projected volume kube-api-access-v8xjn for pod kube-system/kindnet-x9jsh: configmap "kube-root-ca.crt" not found
	Dec 02 20:56:17 embed-certs-386191 kubelet[1321]: E1202 20:56:17.084373    1321 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/410369de-877d-46e5-8f7c-cd8076d1d2f5-kube-api-access-v8xjn podName:410369de-877d-46e5-8f7c-cd8076d1d2f5 nodeName:}" failed. No retries permitted until 2025-12-02 20:56:17.584343935 +0000 UTC m=+5.310253681 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-v8xjn" (UniqueName: "kubernetes.io/projected/410369de-877d-46e5-8f7c-cd8076d1d2f5-kube-api-access-v8xjn") pod "kindnet-x9jsh" (UID: "410369de-877d-46e5-8f7c-cd8076d1d2f5") : configmap "kube-root-ca.crt" not found
	Dec 02 20:56:17 embed-certs-386191 kubelet[1321]: E1202 20:56:17.084493    1321 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Dec 02 20:56:17 embed-certs-386191 kubelet[1321]: E1202 20:56:17.084518    1321 projected.go:196] Error preparing data for projected volume kube-api-access-xhfgh for pod kube-system/kube-proxy-854r8: configmap "kube-root-ca.crt" not found
	Dec 02 20:56:17 embed-certs-386191 kubelet[1321]: E1202 20:56:17.084575    1321 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6c9652b0-217c-466f-9345-7364f0e39936-kube-api-access-xhfgh podName:6c9652b0-217c-466f-9345-7364f0e39936 nodeName:}" failed. No retries permitted until 2025-12-02 20:56:17.584553917 +0000 UTC m=+5.310463672 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-xhfgh" (UniqueName: "kubernetes.io/projected/6c9652b0-217c-466f-9345-7364f0e39936-kube-api-access-xhfgh") pod "kube-proxy-854r8" (UID: "6c9652b0-217c-466f-9345-7364f0e39936") : configmap "kube-root-ca.crt" not found
	Dec 02 20:56:17 embed-certs-386191 kubelet[1321]: E1202 20:56:17.684854    1321 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Dec 02 20:56:17 embed-certs-386191 kubelet[1321]: E1202 20:56:17.684896    1321 projected.go:196] Error preparing data for projected volume kube-api-access-xhfgh for pod kube-system/kube-proxy-854r8: configmap "kube-root-ca.crt" not found
	Dec 02 20:56:17 embed-certs-386191 kubelet[1321]: E1202 20:56:17.684987    1321 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6c9652b0-217c-466f-9345-7364f0e39936-kube-api-access-xhfgh podName:6c9652b0-217c-466f-9345-7364f0e39936 nodeName:}" failed. No retries permitted until 2025-12-02 20:56:18.684950874 +0000 UTC m=+6.410860627 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-xhfgh" (UniqueName: "kubernetes.io/projected/6c9652b0-217c-466f-9345-7364f0e39936-kube-api-access-xhfgh") pod "kube-proxy-854r8" (UID: "6c9652b0-217c-466f-9345-7364f0e39936") : configmap "kube-root-ca.crt" not found
	Dec 02 20:56:17 embed-certs-386191 kubelet[1321]: E1202 20:56:17.684854    1321 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Dec 02 20:56:17 embed-certs-386191 kubelet[1321]: E1202 20:56:17.685013    1321 projected.go:196] Error preparing data for projected volume kube-api-access-v8xjn for pod kube-system/kindnet-x9jsh: configmap "kube-root-ca.crt" not found
	Dec 02 20:56:17 embed-certs-386191 kubelet[1321]: E1202 20:56:17.685063    1321 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/410369de-877d-46e5-8f7c-cd8076d1d2f5-kube-api-access-v8xjn podName:410369de-877d-46e5-8f7c-cd8076d1d2f5 nodeName:}" failed. No retries permitted until 2025-12-02 20:56:18.685039539 +0000 UTC m=+6.410949299 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-v8xjn" (UniqueName: "kubernetes.io/projected/410369de-877d-46e5-8f7c-cd8076d1d2f5-kube-api-access-v8xjn") pod "kindnet-x9jsh" (UID: "410369de-877d-46e5-8f7c-cd8076d1d2f5") : configmap "kube-root-ca.crt" not found
	Dec 02 20:56:19 embed-certs-386191 kubelet[1321]: I1202 20:56:19.421465    1321 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-x9jsh" podStartSLOduration=3.421441718 podStartE2EDuration="3.421441718s" podCreationTimestamp="2025-12-02 20:56:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:56:19.4210057 +0000 UTC m=+7.146915479" watchObservedRunningTime="2025-12-02 20:56:19.421441718 +0000 UTC m=+7.147351480"
	Dec 02 20:56:19 embed-certs-386191 kubelet[1321]: I1202 20:56:19.421603    1321 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-854r8" podStartSLOduration=3.421596141 podStartE2EDuration="3.421596141s" podCreationTimestamp="2025-12-02 20:56:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:56:19.409833048 +0000 UTC m=+7.135742811" watchObservedRunningTime="2025-12-02 20:56:19.421596141 +0000 UTC m=+7.147505904"
	Dec 02 20:56:30 embed-certs-386191 kubelet[1321]: I1202 20:56:30.098790    1321 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Dec 02 20:56:30 embed-certs-386191 kubelet[1321]: I1202 20:56:30.170852    1321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/d37e55bb-bb1f-4659-a9c5-14d47011bd23-tmp\") pod \"storage-provisioner\" (UID: \"d37e55bb-bb1f-4659-a9c5-14d47011bd23\") " pod="kube-system/storage-provisioner"
	Dec 02 20:56:30 embed-certs-386191 kubelet[1321]: I1202 20:56:30.170907    1321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e7159eb1-3cde-437a-99e3-760c9c397977-config-volume\") pod \"coredns-66bc5c9577-q6l9x\" (UID: \"e7159eb1-3cde-437a-99e3-760c9c397977\") " pod="kube-system/coredns-66bc5c9577-q6l9x"
	Dec 02 20:56:30 embed-certs-386191 kubelet[1321]: I1202 20:56:30.170942    1321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwk2c\" (UniqueName: \"kubernetes.io/projected/d37e55bb-bb1f-4659-a9c5-14d47011bd23-kube-api-access-dwk2c\") pod \"storage-provisioner\" (UID: \"d37e55bb-bb1f-4659-a9c5-14d47011bd23\") " pod="kube-system/storage-provisioner"
	Dec 02 20:56:30 embed-certs-386191 kubelet[1321]: I1202 20:56:30.170971    1321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trzrg\" (UniqueName: \"kubernetes.io/projected/e7159eb1-3cde-437a-99e3-760c9c397977-kube-api-access-trzrg\") pod \"coredns-66bc5c9577-q6l9x\" (UID: \"e7159eb1-3cde-437a-99e3-760c9c397977\") " pod="kube-system/coredns-66bc5c9577-q6l9x"
	Dec 02 20:56:31 embed-certs-386191 kubelet[1321]: I1202 20:56:31.441805    1321 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-q6l9x" podStartSLOduration=13.44178244 podStartE2EDuration="13.44178244s" podCreationTimestamp="2025-12-02 20:56:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:56:31.441652654 +0000 UTC m=+19.167562418" watchObservedRunningTime="2025-12-02 20:56:31.44178244 +0000 UTC m=+19.167692205"
	Dec 02 20:56:31 embed-certs-386191 kubelet[1321]: I1202 20:56:31.452014    1321 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.451992506 podStartE2EDuration="14.451992506s" podCreationTimestamp="2025-12-02 20:56:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:56:31.451842036 +0000 UTC m=+19.177751799" watchObservedRunningTime="2025-12-02 20:56:31.451992506 +0000 UTC m=+19.177902269"
	Dec 02 20:56:33 embed-certs-386191 kubelet[1321]: I1202 20:56:33.896410    1321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4sht\" (UniqueName: \"kubernetes.io/projected/ed12a6fb-53bf-431f-b98e-7d12c1f8a178-kube-api-access-t4sht\") pod \"busybox\" (UID: \"ed12a6fb-53bf-431f-b98e-7d12c1f8a178\") " pod="default/busybox"
	Dec 02 20:56:36 embed-certs-386191 kubelet[1321]: I1202 20:56:36.460964    1321 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.522578481 podStartE2EDuration="3.460940316s" podCreationTimestamp="2025-12-02 20:56:33 +0000 UTC" firstStartedPulling="2025-12-02 20:56:34.139022768 +0000 UTC m=+21.864932510" lastFinishedPulling="2025-12-02 20:56:36.077384584 +0000 UTC m=+23.803294345" observedRunningTime="2025-12-02 20:56:36.460661725 +0000 UTC m=+24.186571485" watchObservedRunningTime="2025-12-02 20:56:36.460940316 +0000 UTC m=+24.186850078"
	
	
	==> storage-provisioner [3307811b2f63a3aaca8b768a7f75102a7084bbda9632bcdac08a771d75e006b8] <==
	I1202 20:56:30.499206       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1202 20:56:30.507824       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1202 20:56:30.507947       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1202 20:56:30.510656       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:56:30.515875       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1202 20:56:30.516093       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1202 20:56:30.516297       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"34c56701-7501-4c39-8645-5294da9c60ee", APIVersion:"v1", ResourceVersion:"407", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-386191_9b1ee64e-7467-443a-b5a6-4a1282d87114 became leader
	I1202 20:56:30.516334       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-386191_9b1ee64e-7467-443a-b5a6-4a1282d87114!
	W1202 20:56:30.519644       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:56:30.523514       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1202 20:56:30.617652       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-386191_9b1ee64e-7467-443a-b5a6-4a1282d87114!
	W1202 20:56:32.526631       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:56:32.531341       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:56:34.535270       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:56:34.540680       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:56:36.544195       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:56:36.548495       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:56:38.551467       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:56:38.556870       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:56:40.560389       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:56:40.564644       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:56:42.568197       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:56:42.573992       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:56:44.577184       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:56:44.582824       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
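Note on the storage-provisioner output above: the repeated "v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice" lines are client-go warnings about the Endpoints-based leader-election lock the provisioner still uses; they are warnings, not errors, and the log shows the lease kube-system/k8s.io-minikube-hostpath being acquired successfully. A hedged way to look at that lock object (object name taken from the log; assumes the embed-certs-386191 context is still reachable):

	kubectl --context embed-certs-386191 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml

The leader identity is normally recorded in the control-plane.alpha.kubernetes.io/leader annotation and should match embed-certs-386191_9b1ee64e-7467-443a-b5a6-4a1282d87114 from the log.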
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-386191 -n embed-certs-386191
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-386191 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.22s)
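Earlier in the same post-mortem log, the kubelet briefly failed to mount the projected service-account volumes because the kube-root-ca.crt configmap was not yet published to kube-system; the affected pods are reported running a couple of seconds later (see the pod_startup_latency_tracker entries), so those errors look like the usual bootstrap race rather than part of this failure. A quick hedged check, assuming the cluster is still up:

	kubectl --context embed-certs-386191 -n kube-system get configmap kube-root-ca.crt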

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (6.36s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-997805 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p default-k8s-diff-port-997805 --alsologtostderr -v=1: exit status 80 (2.390056026s)

                                                
                                                
-- stdout --
	* Pausing node default-k8s-diff-port-997805 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 20:56:45.297469  771582 out.go:360] Setting OutFile to fd 1 ...
	I1202 20:56:45.297762  771582 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:56:45.297773  771582 out.go:374] Setting ErrFile to fd 2...
	I1202 20:56:45.297780  771582 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:56:45.297980  771582 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-407427/.minikube/bin
	I1202 20:56:45.298282  771582 out.go:368] Setting JSON to false
	I1202 20:56:45.298307  771582 mustload.go:66] Loading cluster: default-k8s-diff-port-997805
	I1202 20:56:45.298694  771582 config.go:182] Loaded profile config "default-k8s-diff-port-997805": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 20:56:45.299128  771582 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-997805 --format={{.State.Status}}
	I1202 20:56:45.318690  771582 host.go:66] Checking if "default-k8s-diff-port-997805" exists ...
	I1202 20:56:45.319098  771582 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 20:56:45.384979  771582 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:54 OomKillDisable:false NGoroutines:70 SystemTime:2025-12-02 20:56:45.374206648 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 20:56:45.385684  771582 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-997805 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s
(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1202 20:56:45.388056  771582 out.go:179] * Pausing node default-k8s-diff-port-997805 ... 
	I1202 20:56:45.389775  771582 host.go:66] Checking if "default-k8s-diff-port-997805" exists ...
	I1202 20:56:45.390154  771582 ssh_runner.go:195] Run: systemctl --version
	I1202 20:56:45.390220  771582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-997805
	I1202 20:56:45.408883  771582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/default-k8s-diff-port-997805/id_rsa Username:docker}
	I1202 20:56:45.518153  771582 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 20:56:45.540674  771582 pause.go:52] kubelet running: true
	I1202 20:56:45.540752  771582 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1202 20:56:45.700185  771582 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1202 20:56:45.700290  771582 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1202 20:56:45.769168  771582 cri.go:89] found id: "35e720802a1bf3bbed62adc89a0f19dce7a67de2db637573eb1894ab9ebb8f24"
	I1202 20:56:45.769198  771582 cri.go:89] found id: "f06d54a2384df567756e9be0cfb30d79b223d7ca905c4709c051828f8e793c87"
	I1202 20:56:45.769213  771582 cri.go:89] found id: "1e15bb4007b6f6ac5c5aba376e81233c28da69653a99ea88226c07cfeee8a9a7"
	I1202 20:56:45.769220  771582 cri.go:89] found id: "5ad0a1655ba23d5613d29f48e14efa7b904937342c2b4f154af87389ad6ae5a9"
	I1202 20:56:45.769225  771582 cri.go:89] found id: "fc477a72b765693b81689208ff42b491035d31c49ea6b43c64099d495e7cec00"
	I1202 20:56:45.769231  771582 cri.go:89] found id: "25e14e8feafb6c0d6c5261cd5e507b812e39fcb9c7e196408fe69d780ebbcd1d"
	I1202 20:56:45.769236  771582 cri.go:89] found id: "0c7e2844e2dbdbf5b9ffe8bf4e8d07304b64b059e3d4c965c2010c5d8a39c499"
	I1202 20:56:45.769240  771582 cri.go:89] found id: "81b0ec87511a05a7501d98eb27c52f69372a4b30c4ea523db262c140f9b68cd3"
	I1202 20:56:45.769245  771582 cri.go:89] found id: "e13e6c4d6c5da602ac2e1402a7612205c5a0ceffdccf7618da3035e562a7d9d3"
	I1202 20:56:45.769255  771582 cri.go:89] found id: "c9080db2b6daf76ef63b2b59e74d0239edbb838d08547298dd4502c7c3b4d9f4"
	I1202 20:56:45.769264  771582 cri.go:89] found id: "f7c1779df921dc77252b05de7b4552d502a7c9e38f020d197cbdfd6540d6213a"
	I1202 20:56:45.769268  771582 cri.go:89] found id: ""
	I1202 20:56:45.769315  771582 ssh_runner.go:195] Run: sudo runc list -f json
	I1202 20:56:45.781879  771582 retry.go:31] will retry after 198.44135ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T20:56:45Z" level=error msg="open /run/runc: no such file or directory"
	I1202 20:56:45.981412  771582 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 20:56:45.995476  771582 pause.go:52] kubelet running: false
	I1202 20:56:45.995553  771582 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1202 20:56:46.131177  771582 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1202 20:56:46.131291  771582 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1202 20:56:46.200310  771582 cri.go:89] found id: "35e720802a1bf3bbed62adc89a0f19dce7a67de2db637573eb1894ab9ebb8f24"
	I1202 20:56:46.200336  771582 cri.go:89] found id: "f06d54a2384df567756e9be0cfb30d79b223d7ca905c4709c051828f8e793c87"
	I1202 20:56:46.200341  771582 cri.go:89] found id: "1e15bb4007b6f6ac5c5aba376e81233c28da69653a99ea88226c07cfeee8a9a7"
	I1202 20:56:46.200344  771582 cri.go:89] found id: "5ad0a1655ba23d5613d29f48e14efa7b904937342c2b4f154af87389ad6ae5a9"
	I1202 20:56:46.200347  771582 cri.go:89] found id: "fc477a72b765693b81689208ff42b491035d31c49ea6b43c64099d495e7cec00"
	I1202 20:56:46.200351  771582 cri.go:89] found id: "25e14e8feafb6c0d6c5261cd5e507b812e39fcb9c7e196408fe69d780ebbcd1d"
	I1202 20:56:46.200354  771582 cri.go:89] found id: "0c7e2844e2dbdbf5b9ffe8bf4e8d07304b64b059e3d4c965c2010c5d8a39c499"
	I1202 20:56:46.200356  771582 cri.go:89] found id: "81b0ec87511a05a7501d98eb27c52f69372a4b30c4ea523db262c140f9b68cd3"
	I1202 20:56:46.200359  771582 cri.go:89] found id: "e13e6c4d6c5da602ac2e1402a7612205c5a0ceffdccf7618da3035e562a7d9d3"
	I1202 20:56:46.200369  771582 cri.go:89] found id: "c9080db2b6daf76ef63b2b59e74d0239edbb838d08547298dd4502c7c3b4d9f4"
	I1202 20:56:46.200374  771582 cri.go:89] found id: "f7c1779df921dc77252b05de7b4552d502a7c9e38f020d197cbdfd6540d6213a"
	I1202 20:56:46.200378  771582 cri.go:89] found id: ""
	I1202 20:56:46.200429  771582 ssh_runner.go:195] Run: sudo runc list -f json
	I1202 20:56:46.213795  771582 retry.go:31] will retry after 341.978703ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T20:56:46Z" level=error msg="open /run/runc: no such file or directory"
	I1202 20:56:46.556337  771582 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 20:56:46.571378  771582 pause.go:52] kubelet running: false
	I1202 20:56:46.571445  771582 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1202 20:56:46.728753  771582 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1202 20:56:46.728847  771582 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1202 20:56:46.805837  771582 cri.go:89] found id: "35e720802a1bf3bbed62adc89a0f19dce7a67de2db637573eb1894ab9ebb8f24"
	I1202 20:56:46.805860  771582 cri.go:89] found id: "f06d54a2384df567756e9be0cfb30d79b223d7ca905c4709c051828f8e793c87"
	I1202 20:56:46.805866  771582 cri.go:89] found id: "1e15bb4007b6f6ac5c5aba376e81233c28da69653a99ea88226c07cfeee8a9a7"
	I1202 20:56:46.805871  771582 cri.go:89] found id: "5ad0a1655ba23d5613d29f48e14efa7b904937342c2b4f154af87389ad6ae5a9"
	I1202 20:56:46.805875  771582 cri.go:89] found id: "fc477a72b765693b81689208ff42b491035d31c49ea6b43c64099d495e7cec00"
	I1202 20:56:46.805881  771582 cri.go:89] found id: "25e14e8feafb6c0d6c5261cd5e507b812e39fcb9c7e196408fe69d780ebbcd1d"
	I1202 20:56:46.805884  771582 cri.go:89] found id: "0c7e2844e2dbdbf5b9ffe8bf4e8d07304b64b059e3d4c965c2010c5d8a39c499"
	I1202 20:56:46.805888  771582 cri.go:89] found id: "81b0ec87511a05a7501d98eb27c52f69372a4b30c4ea523db262c140f9b68cd3"
	I1202 20:56:46.805892  771582 cri.go:89] found id: "e13e6c4d6c5da602ac2e1402a7612205c5a0ceffdccf7618da3035e562a7d9d3"
	I1202 20:56:46.805931  771582 cri.go:89] found id: "c9080db2b6daf76ef63b2b59e74d0239edbb838d08547298dd4502c7c3b4d9f4"
	I1202 20:56:46.805941  771582 cri.go:89] found id: "f7c1779df921dc77252b05de7b4552d502a7c9e38f020d197cbdfd6540d6213a"
	I1202 20:56:46.805946  771582 cri.go:89] found id: ""
	I1202 20:56:46.805993  771582 ssh_runner.go:195] Run: sudo runc list -f json
	I1202 20:56:46.819566  771582 retry.go:31] will retry after 556.106358ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T20:56:46Z" level=error msg="open /run/runc: no such file or directory"
	I1202 20:56:47.376369  771582 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 20:56:47.390054  771582 pause.go:52] kubelet running: false
	I1202 20:56:47.390130  771582 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1202 20:56:47.529426  771582 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1202 20:56:47.529501  771582 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1202 20:56:47.597892  771582 cri.go:89] found id: "35e720802a1bf3bbed62adc89a0f19dce7a67de2db637573eb1894ab9ebb8f24"
	I1202 20:56:47.597922  771582 cri.go:89] found id: "f06d54a2384df567756e9be0cfb30d79b223d7ca905c4709c051828f8e793c87"
	I1202 20:56:47.597927  771582 cri.go:89] found id: "1e15bb4007b6f6ac5c5aba376e81233c28da69653a99ea88226c07cfeee8a9a7"
	I1202 20:56:47.597931  771582 cri.go:89] found id: "5ad0a1655ba23d5613d29f48e14efa7b904937342c2b4f154af87389ad6ae5a9"
	I1202 20:56:47.597939  771582 cri.go:89] found id: "fc477a72b765693b81689208ff42b491035d31c49ea6b43c64099d495e7cec00"
	I1202 20:56:47.597943  771582 cri.go:89] found id: "25e14e8feafb6c0d6c5261cd5e507b812e39fcb9c7e196408fe69d780ebbcd1d"
	I1202 20:56:47.597945  771582 cri.go:89] found id: "0c7e2844e2dbdbf5b9ffe8bf4e8d07304b64b059e3d4c965c2010c5d8a39c499"
	I1202 20:56:47.597948  771582 cri.go:89] found id: "81b0ec87511a05a7501d98eb27c52f69372a4b30c4ea523db262c140f9b68cd3"
	I1202 20:56:47.597951  771582 cri.go:89] found id: "e13e6c4d6c5da602ac2e1402a7612205c5a0ceffdccf7618da3035e562a7d9d3"
	I1202 20:56:47.597957  771582 cri.go:89] found id: "c9080db2b6daf76ef63b2b59e74d0239edbb838d08547298dd4502c7c3b4d9f4"
	I1202 20:56:47.597959  771582 cri.go:89] found id: "f7c1779df921dc77252b05de7b4552d502a7c9e38f020d197cbdfd6540d6213a"
	I1202 20:56:47.597988  771582 cri.go:89] found id: ""
	I1202 20:56:47.598036  771582 ssh_runner.go:195] Run: sudo runc list -f json
	I1202 20:56:47.612189  771582 out.go:203] 
	W1202 20:56:47.613631  771582 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T20:56:47Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T20:56:47Z" level=error msg="open /run/runc: no such file or directory"
	
	W1202 20:56:47.613654  771582 out.go:285] * 
	* 
	W1202 20:56:47.618386  771582 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 20:56:47.619829  771582 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p default-k8s-diff-port-997805 --alsologtostderr -v=1 failed: exit status 80
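The pause fails because every "sudo runc list -f json" attempt inside the node returns "open /run/runc: no such file or directory", so minikube never gets a container list to freeze, even though crictl had just enumerated eleven running containers. A few hedged diagnostics (the paths below are common runc/CRI-O defaults and are assumptions, not something this report confirms):

	out/minikube-linux-amd64 -p default-k8s-diff-port-997805 ssh -- 'ls -ld /run/runc /run/crio 2>&1'
	out/minikube-linux-amd64 -p default-k8s-diff-port-997805 ssh -- 'sudo crictl ps --state running -o json | head'
	out/minikube-linux-amd64 -p default-k8s-diff-port-997805 ssh -- 'sudo runc --root /run/runc list 2>&1 || true'

If CRI-O keeps its runc state under a different --root on this kicbase image, "runc list" with the default root will keep failing while crictl still shows the containers as running.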
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-997805
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-997805:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c25b25f1d6428e1d0325a0468af68a3a621d93f89c284fc809071a0c5b3636f1",
	        "Created": "2025-12-02T20:54:37.048348832Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 759767,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-02T20:55:43.909243691Z",
	            "FinishedAt": "2025-12-02T20:55:42.856980855Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/c25b25f1d6428e1d0325a0468af68a3a621d93f89c284fc809071a0c5b3636f1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c25b25f1d6428e1d0325a0468af68a3a621d93f89c284fc809071a0c5b3636f1/hostname",
	        "HostsPath": "/var/lib/docker/containers/c25b25f1d6428e1d0325a0468af68a3a621d93f89c284fc809071a0c5b3636f1/hosts",
	        "LogPath": "/var/lib/docker/containers/c25b25f1d6428e1d0325a0468af68a3a621d93f89c284fc809071a0c5b3636f1/c25b25f1d6428e1d0325a0468af68a3a621d93f89c284fc809071a0c5b3636f1-json.log",
	        "Name": "/default-k8s-diff-port-997805",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-997805:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-997805",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c25b25f1d6428e1d0325a0468af68a3a621d93f89c284fc809071a0c5b3636f1",
	                "LowerDir": "/var/lib/docker/overlay2/438615afda3ee0db74f277419380adcb83f92340686904c8b7104d5c82409f9b-init/diff:/var/lib/docker/overlay2/49bb44e987885c48600d5ae7ad3c81cad82a00f38070c0460882c5746d9fae59/diff",
	                "MergedDir": "/var/lib/docker/overlay2/438615afda3ee0db74f277419380adcb83f92340686904c8b7104d5c82409f9b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/438615afda3ee0db74f277419380adcb83f92340686904c8b7104d5c82409f9b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/438615afda3ee0db74f277419380adcb83f92340686904c8b7104d5c82409f9b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-997805",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-997805/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-997805",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-997805",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-997805",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "8838ac99fbe3fe4c9fe647f60f12e972a87928aabd3a210f3a398be9baeeaea0",
	            "SandboxKey": "/var/run/docker/netns/8838ac99fbe3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33508"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33509"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33512"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33510"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33511"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-997805": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "13fe483902b92417fb08b9a25307f2df4dbcc897dff65b84bbef9f2f680f60c8",
	                    "EndpointID": "8db48d775025987b658fd97692e2ba98a47b3f05f2a7fb48257960ac7ddf18bb",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "f2:87:46:d0:55:1b",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-997805",
	                        "c25b25f1d642"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
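The NetworkSettings.Ports block above is also where the SSH endpoint used by the pause command comes from: 22/tcp is published on 127.0.0.1:33508, matching the ssh client opened in the stderr log. The same lookup can be reproduced with the format flag recorded in that stderr:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' default-k8s-diff-port-997805

which should print 33508 for as long as the container keeps these bindings.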
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-997805 -n default-k8s-diff-port-997805
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-997805 -n default-k8s-diff-port-997805: exit status 2 (330.044532ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-997805 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-997805 logs -n 25: (1.145716345s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬───────
──────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼───────
──────────────┤
	│ stop    │ -p default-k8s-diff-port-997805 --alsologtostderr -v=3                                                                                                                                                                                               │ default-k8s-diff-port-997805 │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:55 UTC │
	│ addons  │ enable dashboard -p newest-cni-245604 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ newest-cni-245604            │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:55 UTC │
	│ start   │ -p newest-cni-245604 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-245604            │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:55 UTC │
	│ addons  │ enable dashboard -p no-preload-336331 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ no-preload-336331            │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:55 UTC │
	│ start   │ -p no-preload-336331 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-336331            │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:56 UTC │
	│ image   │ newest-cni-245604 image list --format=json                                                                                                                                                                                                           │ newest-cni-245604            │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:55 UTC │
	│ pause   │ -p newest-cni-245604 --alsologtostderr -v=1                                                                                                                                                                                                          │ newest-cni-245604            │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-997805 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-997805 │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:55 UTC │
	│ start   │ -p default-k8s-diff-port-997805 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-997805 │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:56 UTC │
	│ delete  │ -p newest-cni-245604                                                                                                                                                                                                                                 │ newest-cni-245604            │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:55 UTC │
	│ delete  │ -p newest-cni-245604                                                                                                                                                                                                                                 │ newest-cni-245604            │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:55 UTC │
	│ delete  │ -p disable-driver-mounts-234978                                                                                                                                                                                                                      │ disable-driver-mounts-234978 │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:55 UTC │
	│ start   │ -p embed-certs-386191 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-386191           │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:56 UTC │
	│ image   │ old-k8s-version-992336 image list --format=json                                                                                                                                                                                                      │ old-k8s-version-992336       │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:55 UTC │
	│ pause   │ -p old-k8s-version-992336 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-992336       │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │                     │
	│ delete  │ -p old-k8s-version-992336                                                                                                                                                                                                                            │ old-k8s-version-992336       │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:56 UTC │
	│ delete  │ -p old-k8s-version-992336                                                                                                                                                                                                                            │ old-k8s-version-992336       │ jenkins │ v1.37.0 │ 02 Dec 25 20:56 UTC │ 02 Dec 25 20:56 UTC │
	│ image   │ no-preload-336331 image list --format=json                                                                                                                                                                                                           │ no-preload-336331            │ jenkins │ v1.37.0 │ 02 Dec 25 20:56 UTC │ 02 Dec 25 20:56 UTC │
	│ pause   │ -p no-preload-336331 --alsologtostderr -v=1                                                                                                                                                                                                          │ no-preload-336331            │ jenkins │ v1.37.0 │ 02 Dec 25 20:56 UTC │                     │
	│ delete  │ -p no-preload-336331                                                                                                                                                                                                                                 │ no-preload-336331            │ jenkins │ v1.37.0 │ 02 Dec 25 20:56 UTC │ 02 Dec 25 20:56 UTC │
	│ delete  │ -p no-preload-336331                                                                                                                                                                                                                                 │ no-preload-336331            │ jenkins │ v1.37.0 │ 02 Dec 25 20:56 UTC │ 02 Dec 25 20:56 UTC │
	│ addons  │ enable metrics-server -p embed-certs-386191 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ embed-certs-386191           │ jenkins │ v1.37.0 │ 02 Dec 25 20:56 UTC │                     │
	│ image   │ default-k8s-diff-port-997805 image list --format=json                                                                                                                                                                                                │ default-k8s-diff-port-997805 │ jenkins │ v1.37.0 │ 02 Dec 25 20:56 UTC │ 02 Dec 25 20:56 UTC │
	│ stop    │ -p embed-certs-386191 --alsologtostderr -v=3                                                                                                                                                                                                         │ embed-certs-386191           │ jenkins │ v1.37.0 │ 02 Dec 25 20:56 UTC │                     │
	│ pause   │ -p default-k8s-diff-port-997805 --alsologtostderr -v=1                                                                                                                                                                                               │ default-k8s-diff-port-997805 │ jenkins │ v1.37.0 │ 02 Dec 25 20:56 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴───────
──────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 20:55:49
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 20:55:49.973376  761851 out.go:360] Setting OutFile to fd 1 ...
	I1202 20:55:49.973479  761851 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:55:49.973486  761851 out.go:374] Setting ErrFile to fd 2...
	I1202 20:55:49.973492  761851 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:55:49.973784  761851 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-407427/.minikube/bin
	I1202 20:55:49.974402  761851 out.go:368] Setting JSON to false
	I1202 20:55:49.976053  761851 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":9494,"bootTime":1764699456,"procs":379,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 20:55:49.976153  761851 start.go:143] virtualization: kvm guest
	I1202 20:55:49.979903  761851 out.go:179] * [embed-certs-386191] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1202 20:55:49.981563  761851 out.go:179]   - MINIKUBE_LOCATION=21997
	I1202 20:55:49.981711  761851 notify.go:221] Checking for updates...
	I1202 20:55:49.985961  761851 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 20:55:49.989444  761851 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-407427/kubeconfig
	I1202 20:55:49.990856  761851 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-407427/.minikube
	I1202 20:55:49.992198  761851 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1202 20:55:49.994165  761851 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 20:55:49.996734  761851 config.go:182] Loaded profile config "default-k8s-diff-port-997805": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 20:55:49.996944  761851 config.go:182] Loaded profile config "no-preload-336331": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 20:55:49.997173  761851 config.go:182] Loaded profile config "old-k8s-version-992336": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1202 20:55:49.997373  761851 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 20:55:50.033364  761851 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1202 20:55:50.033467  761851 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 20:55:50.114622  761851 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-02 20:55:50.101227741 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 20:55:50.114779  761851 docker.go:319] overlay module found
	I1202 20:55:50.117537  761851 out.go:179] * Using the docker driver based on user configuration
	I1202 20:55:50.119145  761851 start.go:309] selected driver: docker
	I1202 20:55:50.119167  761851 start.go:927] validating driver "docker" against <nil>
	I1202 20:55:50.119183  761851 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 20:55:50.120035  761851 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 20:55:50.211212  761851 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-02 20:55:50.198488456 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 20:55:50.211445  761851 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1202 20:55:50.211790  761851 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 20:55:50.214433  761851 out.go:179] * Using Docker driver with root privileges
	I1202 20:55:50.218243  761851 cni.go:84] Creating CNI manager for ""
	I1202 20:55:50.218353  761851 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 20:55:50.218375  761851 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1202 20:55:50.218508  761851 start.go:353] cluster config:
	{Name:embed-certs-386191 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-386191 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPI
D:0 GPUs: AutoPauseInterval:1m0s}
	I1202 20:55:50.220045  761851 out.go:179] * Starting "embed-certs-386191" primary control-plane node in "embed-certs-386191" cluster
	I1202 20:55:50.221707  761851 cache.go:134] Beginning downloading kic base image for docker with crio
	I1202 20:55:50.223105  761851 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1202 20:55:50.224334  761851 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 20:55:50.224383  761851 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 20:55:50.224379  761851 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21997-407427/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1202 20:55:50.224423  761851 cache.go:65] Caching tarball of preloaded images
	I1202 20:55:50.224531  761851 preload.go:238] Found /home/jenkins/minikube-integration/21997-407427/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1202 20:55:50.224544  761851 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1202 20:55:50.224682  761851 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/config.json ...
	I1202 20:55:50.224706  761851 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/config.json: {Name:mk4df57c1427e88de36c6d265cf4b7b9447ba4a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:55:50.254982  761851 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1202 20:55:50.255008  761851 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1202 20:55:50.255030  761851 cache.go:243] Successfully downloaded all kic artifacts
	I1202 20:55:50.255092  761851 start.go:360] acquireMachinesLock for embed-certs-386191: {Name:mk07b451c8d7193712ed79603183bf03b141f2ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 20:55:50.255209  761851 start.go:364] duration metric: took 90.207µs to acquireMachinesLock for "embed-certs-386191"
	I1202 20:55:50.255244  761851 start.go:93] Provisioning new machine with config: &{Name:embed-certs-386191 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-386191 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 20:55:50.255372  761851 start.go:125] createHost starting for "" (driver="docker")
	W1202 20:55:47.478474  754876 pod_ready.go:104] pod "coredns-7d764666f9-ghxk6" is not "Ready", error: <nil>
	W1202 20:55:49.480219  754876 pod_ready.go:104] pod "coredns-7d764666f9-ghxk6" is not "Ready", error: <nil>
	I1202 20:55:48.658867  759377 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 20:55:48.658893  759377 machine.go:97] duration metric: took 4.363922202s to provisionDockerMachine
	I1202 20:55:48.658908  759377 start.go:293] postStartSetup for "default-k8s-diff-port-997805" (driver="docker")
	I1202 20:55:48.659934  759377 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 20:55:48.660266  759377 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 20:55:48.660319  759377 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-997805
	I1202 20:55:48.684270  759377 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/default-k8s-diff-port-997805/id_rsa Username:docker}
	I1202 20:55:48.800470  759377 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 20:55:48.806594  759377 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 20:55:48.806641  759377 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 20:55:48.806659  759377 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-407427/.minikube/addons for local assets ...
	I1202 20:55:48.806723  759377 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-407427/.minikube/files for local assets ...
	I1202 20:55:48.806832  759377 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-407427/.minikube/files/etc/ssl/certs/4110322.pem -> 4110322.pem in /etc/ssl/certs
	I1202 20:55:48.807095  759377 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 20:55:48.817526  759377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/files/etc/ssl/certs/4110322.pem --> /etc/ssl/certs/4110322.pem (1708 bytes)
	I1202 20:55:48.843728  759377 start.go:296] duration metric: took 183.799228ms for postStartSetup
	I1202 20:55:48.843844  759377 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 20:55:48.843886  759377 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-997805
	I1202 20:55:48.867562  759377 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/default-k8s-diff-port-997805/id_rsa Username:docker}
	I1202 20:55:48.976679  759377 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 20:55:48.983737  759377 fix.go:56] duration metric: took 5.130755935s for fixHost
	I1202 20:55:48.983779  759377 start.go:83] releasing machines lock for "default-k8s-diff-port-997805", held for 5.130814844s
	I1202 20:55:48.983853  759377 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-997805
	I1202 20:55:49.008951  759377 ssh_runner.go:195] Run: cat /version.json
	I1202 20:55:49.009046  759377 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-997805
	I1202 20:55:49.009048  759377 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 20:55:49.009136  759377 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-997805
	I1202 20:55:49.034693  759377 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/default-k8s-diff-port-997805/id_rsa Username:docker}
	I1202 20:55:49.035313  759377 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/default-k8s-diff-port-997805/id_rsa Username:docker}
	I1202 20:55:49.217584  759377 ssh_runner.go:195] Run: systemctl --version
	I1202 20:55:49.226948  759377 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 20:55:49.280525  759377 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 20:55:49.287579  759377 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 20:55:49.287663  759377 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 20:55:49.299593  759377 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1202 20:55:49.299624  759377 start.go:496] detecting cgroup driver to use...
	I1202 20:55:49.299667  759377 detect.go:190] detected "systemd" cgroup driver on host os
	I1202 20:55:49.299717  759377 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 20:55:49.321346  759377 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 20:55:49.340202  759377 docker.go:218] disabling cri-docker service (if available) ...
	I1202 20:55:49.340276  759377 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 20:55:49.364580  759377 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 20:55:49.384570  759377 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 20:55:49.507838  759377 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 20:55:49.636982  759377 docker.go:234] disabling docker service ...
	I1202 20:55:49.637124  759377 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 20:55:49.660429  759377 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 20:55:49.676580  759377 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 20:55:49.805919  759377 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 20:55:49.932552  759377 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 20:55:49.950808  759377 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 20:55:49.973269  759377 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1202 20:55:49.973378  759377 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:55:49.987382  759377 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1202 20:55:49.987446  759377 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:55:50.001518  759377 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:55:50.015622  759377 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:55:50.029383  759377 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 20:55:50.042396  759377 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:55:50.055622  759377 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:55:50.069706  759377 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:55:50.082027  759377 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 20:55:50.093878  759377 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 20:55:50.106172  759377 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 20:55:50.241651  759377 ssh_runner.go:195] Run: sudo systemctl restart crio
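
The sequence above rewrites two files and then restarts the runtime. As a minimal sketch (key values are the ones quoted in the log lines above; the real 02-crio.conf carries many more settings), the on-node result looks roughly like:

    # /etc/crictl.yaml
    runtime-endpoint: unix:///var/run/crio/crio.sock

    # /etc/crio/crio.conf.d/02-crio.conf -- keys touched by the sed edits
    pause_image = "registry.k8s.io/pause:3.10.1"
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]

The systemctl restart crio immediately above is what makes these drop-in changes take effect.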
	I1202 20:55:51.093615  759377 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 20:55:51.093712  759377 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 20:55:51.098803  759377 start.go:564] Will wait 60s for crictl version
	I1202 20:55:51.098893  759377 ssh_runner.go:195] Run: which crictl
	I1202 20:55:51.103616  759377 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 20:55:51.134275  759377 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1202 20:55:51.134365  759377 ssh_runner.go:195] Run: crio --version
	I1202 20:55:51.176508  759377 ssh_runner.go:195] Run: crio --version
	I1202 20:55:51.212619  759377 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.2 ...
	I1202 20:55:51.213954  759377 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-997805 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 20:55:51.239456  759377 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1202 20:55:51.247008  759377 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 20:55:51.258836  759377 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-997805 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-997805 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 20:55:51.259035  759377 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 20:55:51.259113  759377 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 20:55:51.305184  759377 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 20:55:51.305211  759377 crio.go:433] Images already preloaded, skipping extraction
	I1202 20:55:51.305279  759377 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 20:55:51.336679  759377 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 20:55:51.336721  759377 cache_images.go:86] Images are preloaded, skipping loading
	I1202 20:55:51.336736  759377 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.2 crio true true} ...
	I1202 20:55:51.336850  759377 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-997805 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-997805 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
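
The [Unit]/[Service] text above is only rendered at this point; a few lines further down it is written out as the kubelet drop-in (/etc/systemd/system/kubelet.service.d/10-kubeadm.conf, 378 bytes) alongside the base unit (/lib/systemd/system/kubelet.service, 352 bytes) before kubelet is restarted. On the node, the effective unit including that drop-in can be inspected with:

    systemctl cat kubelet    # prints kubelet.service plus the 10-kubeadm.conf drop-in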
	I1202 20:55:51.336915  759377 ssh_runner.go:195] Run: crio config
	I1202 20:55:51.395485  759377 cni.go:84] Creating CNI manager for ""
	I1202 20:55:51.395526  759377 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 20:55:51.395553  759377 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1202 20:55:51.395590  759377 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-997805 NodeName:default-k8s-diff-port-997805 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.c
rt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 20:55:51.395786  759377 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-997805"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
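
This run only stages the rendered config on the node (as /var/tmp/minikube/kubeadm.yaml.new, see the scp below), since it takes the cluster-restart path rather than a fresh init. As an illustration only, a fresh control plane would consume a file like this via kubeadm's --config flag, for example:

    # hypothetical usage; binary path assumed to match the kubelet/kubectl paths seen in this log
    sudo /var/lib/minikube/binaries/v1.34.2/kubeadm init --config /var/tmp/minikube/kubeadm.yaml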
	
	I1202 20:55:51.395870  759377 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1202 20:55:51.406735  759377 binaries.go:51] Found k8s binaries, skipping transfer
	I1202 20:55:51.406822  759377 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 20:55:51.416228  759377 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1202 20:55:51.430748  759377 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1202 20:55:51.448244  759377 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I1202 20:55:51.463482  759377 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1202 20:55:51.467906  759377 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 20:55:51.480393  759377 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 20:55:51.588830  759377 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 20:55:51.618253  759377 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/default-k8s-diff-port-997805 for IP: 192.168.85.2
	I1202 20:55:51.618282  759377 certs.go:195] generating shared ca certs ...
	I1202 20:55:51.618303  759377 certs.go:227] acquiring lock for ca certs: {Name:mkea11924193dd4dc459e16d70207bde383196df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:55:51.618470  759377 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-407427/.minikube/ca.key
	I1202 20:55:51.618534  759377 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-407427/.minikube/proxy-client-ca.key
	I1202 20:55:51.618547  759377 certs.go:257] generating profile certs ...
	I1202 20:55:51.618661  759377 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/default-k8s-diff-port-997805/client.key
	I1202 20:55:51.618759  759377 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/default-k8s-diff-port-997805/apiserver.key.36ffc693
	I1202 20:55:51.618817  759377 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/default-k8s-diff-port-997805/proxy-client.key
	I1202 20:55:51.618958  759377 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/411032.pem (1338 bytes)
	W1202 20:55:51.619000  759377 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-407427/.minikube/certs/411032_empty.pem, impossibly tiny 0 bytes
	I1202 20:55:51.619010  759377 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca-key.pem (1679 bytes)
	I1202 20:55:51.619043  759377 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca.pem (1082 bytes)
	I1202 20:55:51.619087  759377 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/cert.pem (1123 bytes)
	I1202 20:55:51.619120  759377 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/key.pem (1675 bytes)
	I1202 20:55:51.619173  759377 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/files/etc/ssl/certs/4110322.pem (1708 bytes)
	I1202 20:55:51.619958  759377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 20:55:51.642775  759377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1202 20:55:51.668086  759377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 20:55:51.695111  759377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1202 20:55:51.723055  759377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/default-k8s-diff-port-997805/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1202 20:55:51.757108  759377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/default-k8s-diff-port-997805/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1202 20:55:51.782582  759377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/default-k8s-diff-port-997805/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 20:55:51.803028  759377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/default-k8s-diff-port-997805/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1202 20:55:51.823897  759377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 20:55:51.845621  759377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/certs/411032.pem --> /usr/share/ca-certificates/411032.pem (1338 bytes)
	I1202 20:55:51.866855  759377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/files/etc/ssl/certs/4110322.pem --> /usr/share/ca-certificates/4110322.pem (1708 bytes)
	I1202 20:55:51.890515  759377 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 20:55:51.906355  759377 ssh_runner.go:195] Run: openssl version
	I1202 20:55:51.914259  759377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4110322.pem && ln -fs /usr/share/ca-certificates/4110322.pem /etc/ssl/certs/4110322.pem"
	I1202 20:55:51.925148  759377 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4110322.pem
	I1202 20:55:51.929800  759377 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 20:13 /usr/share/ca-certificates/4110322.pem
	I1202 20:55:51.929869  759377 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4110322.pem
	I1202 20:55:51.972279  759377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4110322.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 20:55:51.983418  759377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 20:55:51.993784  759377 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 20:55:51.999249  759377 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 19:55 /usr/share/ca-certificates/minikubeCA.pem
	I1202 20:55:51.999316  759377 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 20:55:52.049373  759377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 20:55:52.061515  759377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/411032.pem && ln -fs /usr/share/ca-certificates/411032.pem /etc/ssl/certs/411032.pem"
	I1202 20:55:52.072126  759377 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/411032.pem
	I1202 20:55:52.076862  759377 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 20:13 /usr/share/ca-certificates/411032.pem
	I1202 20:55:52.076956  759377 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/411032.pem
	I1202 20:55:52.126642  759377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/411032.pem /etc/ssl/certs/51391683.0"
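
The three link names created above (3ec20f2e.0, b5213941.0, 51391683.0) follow OpenSSL's subject-hash convention: each symlink is named after whatever openssl x509 -hash prints for the certificate, plus a .0 suffix. A quick manual check, using the same paths as the log:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
    ls -l /etc/ssl/certs/b5213941.0                                           # should point at minikubeCA.pem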
	I1202 20:55:52.138458  759377 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 20:55:52.143543  759377 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1202 20:55:52.198225  759377 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1202 20:55:52.254754  759377 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1202 20:55:52.319722  759377 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1202 20:55:52.380903  759377 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1202 20:55:52.422910  759377 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
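
The -checkend 86400 probes above ask whether each control-plane certificate is still valid for at least another day: openssl exits 0 when the certificate will not expire within the given number of seconds, non-zero when it will. For example:

    sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
      && echo "still valid for 24h" || echo "expiring within 24h"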
	I1202 20:55:52.483325  759377 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-997805 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-997805 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 20:55:52.483438  759377 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 20:55:52.483499  759377 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 20:55:52.522620  759377 cri.go:89] found id: "25e14e8feafb6c0d6c5261cd5e507b812e39fcb9c7e196408fe69d780ebbcd1d"
	I1202 20:55:52.522651  759377 cri.go:89] found id: "0c7e2844e2dbdbf5b9ffe8bf4e8d07304b64b059e3d4c965c2010c5d8a39c499"
	I1202 20:55:52.522657  759377 cri.go:89] found id: "81b0ec87511a05a7501d98eb27c52f69372a4b30c4ea523db262c140f9b68cd3"
	I1202 20:55:52.522662  759377 cri.go:89] found id: "e13e6c4d6c5da602ac2e1402a7612205c5a0ceffdccf7618da3035e562a7d9d3"
	I1202 20:55:52.522667  759377 cri.go:89] found id: ""
	I1202 20:55:52.522718  759377 ssh_runner.go:195] Run: sudo runc list -f json
	W1202 20:55:52.539274  759377 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T20:55:52Z" level=error msg="open /run/runc: no such file or directory"
	I1202 20:55:52.539358  759377 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 20:55:52.550759  759377 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1202 20:55:52.550911  759377 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1202 20:55:52.550977  759377 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1202 20:55:52.562444  759377 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1202 20:55:52.563380  759377 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-997805" does not appear in /home/jenkins/minikube-integration/21997-407427/kubeconfig
	I1202 20:55:52.563867  759377 kubeconfig.go:62] /home/jenkins/minikube-integration/21997-407427/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-997805" cluster setting kubeconfig missing "default-k8s-diff-port-997805" context setting]
	I1202 20:55:52.564708  759377 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-407427/kubeconfig: {Name:mk71769b01652992a8aaa239aae4f5c38b3500e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:55:52.567122  759377 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1202 20:55:52.580423  759377 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1202 20:55:52.580475  759377 kubeadm.go:602] duration metric: took 29.545337ms to restartPrimaryControlPlane
	I1202 20:55:52.580492  759377 kubeadm.go:403] duration metric: took 97.179033ms to StartCluster
	I1202 20:55:52.580515  759377 settings.go:142] acquiring lock: {Name:mk79858205e0c110b31d2645b49097e349ff37c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:55:52.580624  759377 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-407427/kubeconfig
	I1202 20:55:52.582395  759377 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-407427/kubeconfig: {Name:mk71769b01652992a8aaa239aae4f5c38b3500e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:55:52.582737  759377 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 20:55:52.582982  759377 config.go:182] Loaded profile config "default-k8s-diff-port-997805": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 20:55:52.583044  759377 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1202 20:55:52.583145  759377 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-997805"
	I1202 20:55:52.583167  759377 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-997805"
	W1202 20:55:52.583180  759377 addons.go:248] addon storage-provisioner should already be in state true
	I1202 20:55:52.583208  759377 host.go:66] Checking if "default-k8s-diff-port-997805" exists ...
	I1202 20:55:52.583706  759377 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-997805 --format={{.State.Status}}
	I1202 20:55:52.583924  759377 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-997805"
	I1202 20:55:52.583949  759377 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-997805"
	W1202 20:55:52.583958  759377 addons.go:248] addon dashboard should already be in state true
	I1202 20:55:52.583987  759377 host.go:66] Checking if "default-k8s-diff-port-997805" exists ...
	I1202 20:55:52.584470  759377 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-997805 --format={{.State.Status}}
	I1202 20:55:52.584621  759377 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-997805"
	I1202 20:55:52.584638  759377 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-997805"
	I1202 20:55:52.584909  759377 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-997805 --format={{.State.Status}}
	I1202 20:55:52.590138  759377 out.go:179] * Verifying Kubernetes components...
	I1202 20:55:52.591985  759377 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 20:55:52.621520  759377 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-997805"
	W1202 20:55:52.621550  759377 addons.go:248] addon default-storageclass should already be in state true
	I1202 20:55:52.621581  759377 host.go:66] Checking if "default-k8s-diff-port-997805" exists ...
	I1202 20:55:52.621962  759377 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1202 20:55:52.621973  759377 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 20:55:52.622100  759377 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-997805 --format={{.State.Status}}
	I1202 20:55:52.623522  759377 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 20:55:52.623542  759377 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1202 20:55:52.623861  759377 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-997805
	I1202 20:55:52.629794  759377 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1202 20:55:52.631326  759377 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1202 20:55:52.631354  759377 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1202 20:55:52.631441  759377 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-997805
	I1202 20:55:52.650454  759377 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1202 20:55:52.650440  759377 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/default-k8s-diff-port-997805/id_rsa Username:docker}
	I1202 20:55:52.650477  759377 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1202 20:55:52.650539  759377 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-997805
	I1202 20:55:52.664697  759377 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/default-k8s-diff-port-997805/id_rsa Username:docker}
	I1202 20:55:52.687593  759377 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/default-k8s-diff-port-997805/id_rsa Username:docker}
	I1202 20:55:52.782783  759377 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 20:55:52.788136  759377 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 20:55:52.796186  759377 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1202 20:55:52.796227  759377 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1202 20:55:52.805245  759377 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-997805" to be "Ready" ...
	I1202 20:55:52.813493  759377 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1202 20:55:52.816061  759377 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1202 20:55:52.816120  759377 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1202 20:55:52.836609  759377 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1202 20:55:52.836641  759377 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1202 20:55:52.858664  759377 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1202 20:55:52.858695  759377 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1202 20:55:52.881817  759377 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1202 20:55:52.881850  759377 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1202 20:55:52.898249  759377 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1202 20:55:52.898282  759377 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1202 20:55:52.916317  759377 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1202 20:55:52.916341  759377 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1202 20:55:52.934311  759377 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1202 20:55:52.934421  759377 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1202 20:55:52.954130  759377 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1202 20:55:52.954156  759377 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1202 20:55:52.971994  759377 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
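
After those ten dashboard manifests are applied, a natural follow-up (not part of this log) is to watch the addon's deployment come up. This assumes the manifests create the usual kubernetes-dashboard namespace, which is not shown verbatim above:

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.34.2/kubectl -n kubernetes-dashboard get deploy,pods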
	I1202 20:55:50.259730  761851 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1202 20:55:50.260957  761851 start.go:159] libmachine.API.Create for "embed-certs-386191" (driver="docker")
	I1202 20:55:50.261018  761851 client.go:173] LocalClient.Create starting
	I1202 20:55:50.261131  761851 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca.pem
	I1202 20:55:50.261175  761851 main.go:143] libmachine: Decoding PEM data...
	I1202 20:55:50.261199  761851 main.go:143] libmachine: Parsing certificate...
	I1202 20:55:50.261293  761851 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21997-407427/.minikube/certs/cert.pem
	I1202 20:55:50.261321  761851 main.go:143] libmachine: Decoding PEM data...
	I1202 20:55:50.261336  761851 main.go:143] libmachine: Parsing certificate...
	I1202 20:55:50.261828  761851 cli_runner.go:164] Run: docker network inspect embed-certs-386191 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1202 20:55:50.287353  761851 cli_runner.go:211] docker network inspect embed-certs-386191 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1202 20:55:50.287436  761851 network_create.go:284] running [docker network inspect embed-certs-386191] to gather additional debugging logs...
	I1202 20:55:50.287467  761851 cli_runner.go:164] Run: docker network inspect embed-certs-386191
	W1202 20:55:50.313420  761851 cli_runner.go:211] docker network inspect embed-certs-386191 returned with exit code 1
	I1202 20:55:50.313458  761851 network_create.go:287] error running [docker network inspect embed-certs-386191]: docker network inspect embed-certs-386191: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-386191 not found
	I1202 20:55:50.313493  761851 network_create.go:289] output of [docker network inspect embed-certs-386191]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-386191 not found
	
	** /stderr **
	I1202 20:55:50.313695  761851 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 20:55:50.339597  761851 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-acf081edf266 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:42:04:c0:60:47:62} reservation:<nil>}
	I1202 20:55:50.340759  761851 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-9623a21fb225 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:6a:fc:8b:40:15:1b} reservation:<nil>}
	I1202 20:55:50.341559  761851 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-f2b79e7e26a5 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:0e:c7:f4:38:1c:32} reservation:<nil>}
	I1202 20:55:50.342581  761851 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-be4fb772701b IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:22:87:5f:38:96:b7} reservation:<nil>}
	I1202 20:55:50.343861  761851 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-13fe483902b9 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:a2:a4:21:b2:62:5a} reservation:<nil>}
	I1202 20:55:50.344785  761851 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-65ab470fa0e2 IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:16:23:28:7c:c5:24} reservation:<nil>}
	I1202 20:55:50.346012  761851 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001fb66a0}
	I1202 20:55:50.346044  761851 network_create.go:124] attempt to create docker network embed-certs-386191 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1202 20:55:50.346142  761851 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-386191 embed-certs-386191
	I1202 20:55:50.449757  761851 network_create.go:108] docker network embed-certs-386191 192.168.103.0/24 created
	I1202 20:55:50.449812  761851 kic.go:121] calculated static IP "192.168.103.2" for the "embed-certs-386191" container
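
The subnet probing above skips every 192.168.x.0/24 network already backed by a bridge interface and settles on 192.168.103.0/24. A small Go sketch of the same first-free-subnet idea; the step of 9 in the third octet simply matches this run's sequence (49, 58, 67, ...), and the taken set is hard-coded here, whereas minikube derives it from docker network inspection.

package main

import "fmt"

// pickFreeSubnet walks candidate private /24 subnets and returns the first
// one not present in the taken set.
func pickFreeSubnet(taken map[string]bool) string {
	for third := 49; third <= 254; third += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", third)
		if !taken[subnet] {
			return subnet
		}
	}
	return ""
}

func main() {
	taken := map[string]bool{
		"192.168.49.0/24": true, "192.168.58.0/24": true, "192.168.67.0/24": true,
		"192.168.76.0/24": true, "192.168.85.0/24": true, "192.168.94.0/24": true,
	}
	fmt.Println(pickFreeSubnet(taken)) // prints 192.168.103.0/24, as in this run
}
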
	I1202 20:55:50.449912  761851 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1202 20:55:50.476319  761851 cli_runner.go:164] Run: docker volume create embed-certs-386191 --label name.minikube.sigs.k8s.io=embed-certs-386191 --label created_by.minikube.sigs.k8s.io=true
	I1202 20:55:50.544287  761851 oci.go:103] Successfully created a docker volume embed-certs-386191
	I1202 20:55:50.544384  761851 cli_runner.go:164] Run: docker run --rm --name embed-certs-386191-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-386191 --entrypoint /usr/bin/test -v embed-certs-386191:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib
	I1202 20:55:51.390297  761851 oci.go:107] Successfully prepared a docker volume embed-certs-386191
	I1202 20:55:51.390398  761851 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 20:55:51.390416  761851 kic.go:194] Starting extracting preloaded images to volume ...
	I1202 20:55:51.390490  761851 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21997-407427/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-386191:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -I lz4 -xf /preloaded.tar -C /extractDir
	W1202 20:55:51.979014  754876 pod_ready.go:104] pod "coredns-7d764666f9-ghxk6" is not "Ready", error: <nil>
	W1202 20:55:54.048006  754876 pod_ready.go:104] pod "coredns-7d764666f9-ghxk6" is not "Ready", error: <nil>
	I1202 20:55:54.222552  759377 node_ready.go:49] node "default-k8s-diff-port-997805" is "Ready"
	I1202 20:55:54.222597  759377 node_ready.go:38] duration metric: took 1.417304277s for node "default-k8s-diff-port-997805" to be "Ready" ...
	I1202 20:55:54.222616  759377 api_server.go:52] waiting for apiserver process to appear ...
	I1202 20:55:54.222680  759377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:55:55.521273  759377 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.733090646s)
	I1202 20:55:55.521348  759377 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.707827699s)
	I1202 20:55:55.956240  759377 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.984189677s)
	I1202 20:55:55.956260  759377 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.733551247s)
	I1202 20:55:55.956296  759377 api_server.go:72] duration metric: took 3.373517458s to wait for apiserver process to appear ...
	I1202 20:55:55.956305  759377 api_server.go:88] waiting for apiserver healthz status ...
	I1202 20:55:55.956329  759377 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1202 20:55:55.957591  759377 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-997805 addons enable metrics-server
	
	I1202 20:55:55.960080  759377 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1202 20:55:55.961425  759377 addons.go:530] duration metric: took 3.378380909s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1202 20:55:55.963108  759377 api_server.go:279] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 20:55:55.963149  759377 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 20:55:56.456815  759377 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1202 20:55:56.464867  759377 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1202 20:55:56.466374  759377 api_server.go:141] control plane version: v1.34.2
	I1202 20:55:56.466405  759377 api_server.go:131] duration metric: took 510.092ms to wait for apiserver health ...
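
The healthz wait above retries until the apiserver stops returning 500 for the rbac bootstrap-roles post-start hook and answers 200. A rough Go sketch of polling such an endpoint until it is healthy, assuming an HTTPS URL like the one in this run; TLS verification is skipped purely to keep the example short, whereas minikube uses the cluster's certificates.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the given /healthz URL until it returns HTTP 200 or
// the deadline passes.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   2 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("healthz did not return 200 within %s", timeout)
}

func main() {
	// Endpoint from this run; substitute your own apiserver address.
	if err := waitForHealthz("https://192.168.85.2:8444/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}
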
	I1202 20:55:56.466417  759377 system_pods.go:43] waiting for kube-system pods to appear ...
	I1202 20:55:56.470286  759377 system_pods.go:59] 8 kube-system pods found
	I1202 20:55:56.470321  759377 system_pods.go:61] "coredns-66bc5c9577-jrln7" [37de7399-6357-4f08-9240-fc9e0d884f47] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 20:55:56.470336  759377 system_pods.go:61] "etcd-default-k8s-diff-port-997805" [446321a2-6abf-495a-8198-da2e43d8c18d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1202 20:55:56.470354  759377 system_pods.go:61] "kindnet-rzqpn" [eabc6de0-0707-4a1b-ab5a-4f6e8255bcfb] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1202 20:55:56.470364  759377 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-997805" [04c7ea7b-9b4e-44ee-8148-107daccf6b7b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1202 20:55:56.470376  759377 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-997805" [65af3257-fa45-4d8c-bab3-46c0390ab8df] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1202 20:55:56.470395  759377 system_pods.go:61] "kube-proxy-s2jpn" [407f6b3c-8d8b-47b0-b994-c061eedc6420] Running
	I1202 20:55:56.470403  759377 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-997805" [803a952f-54ba-4326-b70f-907fc7db368e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1202 20:55:56.470411  759377 system_pods.go:61] "storage-provisioner" [08893b97-1192-4f4e-8636-8f2ba82c853d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 20:55:56.470419  759377 system_pods.go:74] duration metric: took 3.994668ms to wait for pod list to return data ...
	I1202 20:55:56.470434  759377 default_sa.go:34] waiting for default service account to be created ...
	I1202 20:55:56.472796  759377 default_sa.go:45] found service account: "default"
	I1202 20:55:56.472821  759377 default_sa.go:55] duration metric: took 2.376879ms for default service account to be created ...
	I1202 20:55:56.472832  759377 system_pods.go:116] waiting for k8s-apps to be running ...
	I1202 20:55:56.476530  759377 system_pods.go:86] 8 kube-system pods found
	I1202 20:55:56.476568  759377 system_pods.go:89] "coredns-66bc5c9577-jrln7" [37de7399-6357-4f08-9240-fc9e0d884f47] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 20:55:56.476586  759377 system_pods.go:89] "etcd-default-k8s-diff-port-997805" [446321a2-6abf-495a-8198-da2e43d8c18d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1202 20:55:56.476598  759377 system_pods.go:89] "kindnet-rzqpn" [eabc6de0-0707-4a1b-ab5a-4f6e8255bcfb] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1202 20:55:56.476611  759377 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-997805" [04c7ea7b-9b4e-44ee-8148-107daccf6b7b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1202 20:55:56.476622  759377 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-997805" [65af3257-fa45-4d8c-bab3-46c0390ab8df] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1202 20:55:56.476636  759377 system_pods.go:89] "kube-proxy-s2jpn" [407f6b3c-8d8b-47b0-b994-c061eedc6420] Running
	I1202 20:55:56.476644  759377 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-997805" [803a952f-54ba-4326-b70f-907fc7db368e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1202 20:55:56.476652  759377 system_pods.go:89] "storage-provisioner" [08893b97-1192-4f4e-8636-8f2ba82c853d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 20:55:56.476666  759377 system_pods.go:126] duration metric: took 3.826088ms to wait for k8s-apps to be running ...
	I1202 20:55:56.476679  759377 system_svc.go:44] waiting for kubelet service to be running ....
	I1202 20:55:56.476731  759377 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 20:55:56.496595  759377 system_svc.go:56] duration metric: took 19.904103ms WaitForService to wait for kubelet
	I1202 20:55:56.496628  759377 kubeadm.go:587] duration metric: took 3.913848958s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 20:55:56.496651  759377 node_conditions.go:102] verifying NodePressure condition ...
	I1202 20:55:56.501320  759377 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1202 20:55:56.501357  759377 node_conditions.go:123] node cpu capacity is 8
	I1202 20:55:56.501378  759377 node_conditions.go:105] duration metric: took 4.719966ms to run NodePressure ...
	I1202 20:55:56.501394  759377 start.go:242] waiting for startup goroutines ...
	I1202 20:55:56.501406  759377 start.go:247] waiting for cluster config update ...
	I1202 20:55:56.501422  759377 start.go:256] writing updated cluster config ...
	I1202 20:55:56.501764  759377 ssh_runner.go:195] Run: rm -f paused
	I1202 20:55:56.507506  759377 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1202 20:55:56.511978  759377 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-jrln7" in "kube-system" namespace to be "Ready" or be gone ...
	W1202 20:55:58.518638  759377 pod_ready.go:104] pod "coredns-66bc5c9577-jrln7" is not "Ready", error: <nil>
	I1202 20:55:55.882395  761851 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21997-407427/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-386191:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -I lz4 -xf /preloaded.tar -C /extractDir: (4.491855191s)
	I1202 20:55:55.882432  761851 kic.go:203] duration metric: took 4.49201135s to extract preloaded images to volume ...
	W1202 20:55:55.882649  761851 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1202 20:55:55.882730  761851 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1202 20:55:55.882796  761851 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1202 20:55:55.970786  761851 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-386191 --name embed-certs-386191 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-386191 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-386191 --network embed-certs-386191 --ip 192.168.103.2 --volume embed-certs-386191:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b
	I1202 20:55:56.322797  761851 cli_runner.go:164] Run: docker container inspect embed-certs-386191 --format={{.State.Running}}
	I1202 20:55:56.346318  761851 cli_runner.go:164] Run: docker container inspect embed-certs-386191 --format={{.State.Status}}
	I1202 20:55:56.369508  761851 cli_runner.go:164] Run: docker exec embed-certs-386191 stat /var/lib/dpkg/alternatives/iptables
	I1202 20:55:56.426161  761851 oci.go:144] the created container "embed-certs-386191" has a running status.
	I1202 20:55:56.426198  761851 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21997-407427/.minikube/machines/embed-certs-386191/id_rsa...
	I1202 20:55:56.605690  761851 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21997-407427/.minikube/machines/embed-certs-386191/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1202 20:55:56.639247  761851 cli_runner.go:164] Run: docker container inspect embed-certs-386191 --format={{.State.Status}}
	I1202 20:55:56.661049  761851 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1202 20:55:56.661086  761851 kic_runner.go:114] Args: [docker exec --privileged embed-certs-386191 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1202 20:55:56.743919  761851 cli_runner.go:164] Run: docker container inspect embed-certs-386191 --format={{.State.Status}}
	I1202 20:55:56.771200  761851 machine.go:94] provisionDockerMachine start ...
	I1202 20:55:56.771338  761851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-386191
	I1202 20:55:56.796209  761851 main.go:143] libmachine: Using SSH client type: native
	I1202 20:55:56.796568  761851 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33513 <nil> <nil>}
	I1202 20:55:56.796593  761851 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 20:55:56.950615  761851 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-386191
	
	I1202 20:55:56.950657  761851 ubuntu.go:182] provisioning hostname "embed-certs-386191"
	I1202 20:55:56.950733  761851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-386191
	I1202 20:55:56.973211  761851 main.go:143] libmachine: Using SSH client type: native
	I1202 20:55:56.973537  761851 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33513 <nil> <nil>}
	I1202 20:55:56.973561  761851 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-386191 && echo "embed-certs-386191" | sudo tee /etc/hostname
	I1202 20:55:57.141391  761851 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-386191
	
	I1202 20:55:57.141500  761851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-386191
	I1202 20:55:57.162911  761851 main.go:143] libmachine: Using SSH client type: native
	I1202 20:55:57.163198  761851 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33513 <nil> <nil>}
	I1202 20:55:57.163228  761851 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-386191' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-386191/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-386191' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 20:55:57.310513  761851 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1202 20:55:57.310553  761851 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-407427/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-407427/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-407427/.minikube}
	I1202 20:55:57.310589  761851 ubuntu.go:190] setting up certificates
	I1202 20:55:57.310609  761851 provision.go:84] configureAuth start
	I1202 20:55:57.310699  761851 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-386191
	I1202 20:55:57.331293  761851 provision.go:143] copyHostCerts
	I1202 20:55:57.331361  761851 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-407427/.minikube/cert.pem, removing ...
	I1202 20:55:57.331377  761851 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-407427/.minikube/cert.pem
	I1202 20:55:57.331457  761851 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-407427/.minikube/cert.pem (1123 bytes)
	I1202 20:55:57.331608  761851 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-407427/.minikube/key.pem, removing ...
	I1202 20:55:57.331619  761851 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-407427/.minikube/key.pem
	I1202 20:55:57.331661  761851 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-407427/.minikube/key.pem (1675 bytes)
	I1202 20:55:57.331806  761851 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-407427/.minikube/ca.pem, removing ...
	I1202 20:55:57.331820  761851 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-407427/.minikube/ca.pem
	I1202 20:55:57.331861  761851 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-407427/.minikube/ca.pem (1082 bytes)
	I1202 20:55:57.331969  761851 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-407427/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca-key.pem org=jenkins.embed-certs-386191 san=[127.0.0.1 192.168.103.2 embed-certs-386191 localhost minikube]
	I1202 20:55:57.478343  761851 provision.go:177] copyRemoteCerts
	I1202 20:55:57.478412  761851 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 20:55:57.478461  761851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-386191
	I1202 20:55:57.503684  761851 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33513 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/embed-certs-386191/id_rsa Username:docker}
	I1202 20:55:57.613653  761851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 20:55:57.638025  761851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1202 20:55:57.660295  761851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1202 20:55:57.684474  761851 provision.go:87] duration metric: took 373.842939ms to configureAuth
	I1202 20:55:57.684512  761851 ubuntu.go:206] setting minikube options for container-runtime
	I1202 20:55:57.684722  761851 config.go:182] Loaded profile config "embed-certs-386191": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 20:55:57.684859  761851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-386191
	I1202 20:55:57.705791  761851 main.go:143] libmachine: Using SSH client type: native
	I1202 20:55:57.706104  761851 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33513 <nil> <nil>}
	I1202 20:55:57.706127  761851 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 20:55:58.017837  761851 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 20:55:58.017867  761851 machine.go:97] duration metric: took 1.246644154s to provisionDockerMachine
	I1202 20:55:58.017881  761851 client.go:176] duration metric: took 7.756854866s to LocalClient.Create
	I1202 20:55:58.017904  761851 start.go:167] duration metric: took 7.756953433s to libmachine.API.Create "embed-certs-386191"
	I1202 20:55:58.017914  761851 start.go:293] postStartSetup for "embed-certs-386191" (driver="docker")
	I1202 20:55:58.017926  761851 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 20:55:58.017993  761851 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 20:55:58.018051  761851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-386191
	I1202 20:55:58.040966  761851 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33513 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/embed-certs-386191/id_rsa Username:docker}
	I1202 20:55:58.164646  761851 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 20:55:58.169173  761851 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 20:55:58.169218  761851 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 20:55:58.169234  761851 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-407427/.minikube/addons for local assets ...
	I1202 20:55:58.169292  761851 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-407427/.minikube/files for local assets ...
	I1202 20:55:58.169398  761851 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-407427/.minikube/files/etc/ssl/certs/4110322.pem -> 4110322.pem in /etc/ssl/certs
	I1202 20:55:58.169534  761851 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 20:55:58.178343  761851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/files/etc/ssl/certs/4110322.pem --> /etc/ssl/certs/4110322.pem (1708 bytes)
	I1202 20:55:58.201537  761851 start.go:296] duration metric: took 183.605841ms for postStartSetup
	I1202 20:55:58.201980  761851 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-386191
	I1202 20:55:58.222381  761851 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/config.json ...
	I1202 20:55:58.222725  761851 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 20:55:58.222779  761851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-386191
	I1202 20:55:58.246974  761851 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33513 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/embed-certs-386191/id_rsa Username:docker}
	I1202 20:55:58.349308  761851 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 20:55:58.354335  761851 start.go:128] duration metric: took 8.098942472s to createHost
	I1202 20:55:58.354367  761851 start.go:83] releasing machines lock for "embed-certs-386191", held for 8.099141281s
	I1202 20:55:58.354452  761851 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-386191
	I1202 20:55:58.375692  761851 ssh_runner.go:195] Run: cat /version.json
	I1202 20:55:58.375743  761851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-386191
	I1202 20:55:58.375778  761851 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 20:55:58.375875  761851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-386191
	I1202 20:55:58.399444  761851 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33513 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/embed-certs-386191/id_rsa Username:docker}
	I1202 20:55:58.401096  761851 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33513 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/embed-certs-386191/id_rsa Username:docker}
	I1202 20:55:58.567709  761851 ssh_runner.go:195] Run: systemctl --version
	I1202 20:55:58.576291  761851 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 20:55:58.616262  761851 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 20:55:58.621961  761851 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 20:55:58.622044  761851 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 20:55:58.651183  761851 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1202 20:55:58.651217  761851 start.go:496] detecting cgroup driver to use...
	I1202 20:55:58.651265  761851 detect.go:190] detected "systemd" cgroup driver on host os
	I1202 20:55:58.651331  761851 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 20:55:58.670441  761851 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 20:55:58.684478  761851 docker.go:218] disabling cri-docker service (if available) ...
	I1202 20:55:58.684542  761851 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 20:55:58.704480  761851 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 20:55:58.725624  761851 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 20:55:58.831744  761851 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 20:55:58.927526  761851 docker.go:234] disabling docker service ...
	I1202 20:55:58.927588  761851 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 20:55:58.947085  761851 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 20:55:58.961716  761851 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 20:55:59.059830  761851 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 20:55:59.155836  761851 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 20:55:59.170575  761851 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 20:55:59.187647  761851 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1202 20:55:59.187711  761851 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:55:59.199691  761851 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1202 20:55:59.199752  761851 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:55:59.210377  761851 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:55:59.221666  761851 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:55:59.233039  761851 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 20:55:59.242836  761851 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:55:59.252564  761851 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:55:59.268580  761851 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:55:59.279302  761851 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 20:55:59.288550  761851 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 20:55:59.297166  761851 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 20:55:59.384478  761851 ssh_runner.go:195] Run: sudo systemctl restart crio
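
The chain of sed edits above rewrites /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager, conmon cgroup, sysctls) before crio is restarted. A compact Go sketch of the two central substitutions, applied to an in-memory string rather than the real config file; the values mirror this run.

package main

import (
	"fmt"
	"regexp"
)

// rewriteCrioConf pins the pause image and switches the cgroup manager to
// systemd, the same edits the log performs with sed.
func rewriteCrioConf(conf string) string {
	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	conf = pause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	conf = cgroup.ReplaceAllString(conf, `cgroup_manager = "systemd"`)
	return conf
}

func main() {
	in := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"cgroupfs\"\n"
	fmt.Print(rewriteCrioConf(in))
}
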
	I1202 20:55:59.534012  761851 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 20:55:59.534100  761851 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 20:55:59.538865  761851 start.go:564] Will wait 60s for crictl version
	I1202 20:55:59.538929  761851 ssh_runner.go:195] Run: which crictl
	I1202 20:55:59.542822  761851 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 20:55:59.570175  761851 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1202 20:55:59.570275  761851 ssh_runner.go:195] Run: crio --version
	I1202 20:55:59.600365  761851 ssh_runner.go:195] Run: crio --version
	I1202 20:55:59.632281  761851 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.2 ...
	I1202 20:55:59.633569  761851 cli_runner.go:164] Run: docker network inspect embed-certs-386191 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 20:55:59.653989  761851 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1202 20:55:59.659705  761851 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
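
The one-liner above rewrites /etc/hosts so host.minikube.internal resolves to the network gateway: drop any line already ending in that name, then append "IP<TAB>name". The same edit expressed as a small, side-effect-free Go function that operates on a string instead of the real file.

package main

import (
	"fmt"
	"strings"
)

// ensureHostEntry removes any existing line for the given name and appends a
// fresh "ip<TAB>name" entry, matching the grep/echo pipeline in the log.
func ensureHostEntry(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	hosts := "127.0.0.1\tlocalhost\n"
	fmt.Print(ensureHostEntry(hosts, "192.168.103.1", "host.minikube.internal"))
}
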
	I1202 20:55:59.673939  761851 kubeadm.go:884] updating cluster {Name:embed-certs-386191 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-386191 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 20:55:59.674148  761851 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 20:55:59.674231  761851 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 20:55:59.721572  761851 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 20:55:59.721623  761851 crio.go:433] Images already preloaded, skipping extraction
	I1202 20:55:59.721807  761851 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 20:55:59.763726  761851 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 20:55:59.763753  761851 cache_images.go:86] Images are preloaded, skipping loading
	I1202 20:55:59.763763  761851 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.2 crio true true} ...
	I1202 20:55:59.763877  761851 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-386191 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:embed-certs-386191 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1202 20:55:59.763974  761851 ssh_runner.go:195] Run: crio config
	I1202 20:55:59.830764  761851 cni.go:84] Creating CNI manager for ""
	I1202 20:55:59.830790  761851 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 20:55:59.830809  761851 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1202 20:55:59.830832  761851 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-386191 NodeName:embed-certs-386191 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 20:55:59.830950  761851 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-386191"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1202 20:55:59.831035  761851 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1202 20:55:59.841880  761851 binaries.go:51] Found k8s binaries, skipping transfer
	I1202 20:55:59.841954  761851 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 20:55:59.852027  761851 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (369 bytes)
	I1202 20:55:59.869099  761851 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1202 20:55:59.889821  761851 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2217 bytes)
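
The kubeadm configuration dumped above is generated from the cluster profile and then written to /var/tmp/minikube/kubeadm.yaml.new on the node. A stripped-down Go sketch of rendering such a config from per-node parameters with text/template; the template below is illustrative only, not the one minikube ships.

package main

import (
	"os"
	"text/template"
)

// kubeadmTmpl parameterises only the fields that vary per node in the config
// shown above (advertise address, port, node name).
const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.Port}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.Name}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
	// Values taken from this run's log.
	_ = t.Execute(os.Stdout, struct {
		NodeIP string
		Port   int
		Name   string
	}{"192.168.103.2", 8443, "embed-certs-386191"})
}
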
	I1202 20:55:59.907811  761851 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1202 20:55:59.913347  761851 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 20:55:59.927373  761851 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	W1202 20:55:56.478639  754876 pod_ready.go:104] pod "coredns-7d764666f9-ghxk6" is not "Ready", error: <nil>
	W1202 20:55:58.978346  754876 pod_ready.go:104] pod "coredns-7d764666f9-ghxk6" is not "Ready", error: <nil>
	I1202 20:56:00.050556  761851 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 20:56:00.077300  761851 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191 for IP: 192.168.103.2
	I1202 20:56:00.077325  761851 certs.go:195] generating shared ca certs ...
	I1202 20:56:00.077348  761851 certs.go:227] acquiring lock for ca certs: {Name:mkea11924193dd4dc459e16d70207bde383196df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:56:00.077530  761851 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-407427/.minikube/ca.key
	I1202 20:56:00.077575  761851 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-407427/.minikube/proxy-client-ca.key
	I1202 20:56:00.077588  761851 certs.go:257] generating profile certs ...
	I1202 20:56:00.077664  761851 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/client.key
	I1202 20:56:00.077682  761851 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/client.crt with IP's: []
	I1202 20:56:00.252632  761851 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/client.crt ...
	I1202 20:56:00.252663  761851 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/client.crt: {Name:mk9d10e4646efb676095250174819771b143a8ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:56:00.252877  761851 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/client.key ...
	I1202 20:56:00.252896  761851 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/client.key: {Name:mk09798c33ea1ea9f8eb08ebf47349e244c0760e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:56:00.253023  761851 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/apiserver.key.1b423d29
	I1202 20:56:00.253048  761851 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/apiserver.crt.1b423d29 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1202 20:56:00.432017  761851 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/apiserver.crt.1b423d29 ...
	I1202 20:56:00.432052  761851 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/apiserver.crt.1b423d29: {Name:mk6d91134ec48be46c0e886b478e71e1794c3cdc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:56:00.432278  761851 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/apiserver.key.1b423d29 ...
	I1202 20:56:00.432302  761851 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/apiserver.key.1b423d29: {Name:mk97fa0403fe534a503bf999364704991b597622 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:56:00.432413  761851 certs.go:382] copying /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/apiserver.crt.1b423d29 -> /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/apiserver.crt
	I1202 20:56:00.432512  761851 certs.go:386] copying /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/apiserver.key.1b423d29 -> /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/apiserver.key
	I1202 20:56:00.432593  761851 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/proxy-client.key
	I1202 20:56:00.432619  761851 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/proxy-client.crt with IP's: []
	I1202 20:56:00.527766  761851 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/proxy-client.crt ...
	I1202 20:56:00.527802  761851 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/proxy-client.crt: {Name:mke9848302a1327d00a26fb35bc8d56284a1ca08 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:56:00.528029  761851 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/proxy-client.key ...
	I1202 20:56:00.528053  761851 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/proxy-client.key: {Name:mk5b412430aa6855d80ede6a2641ba2256c9a484 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:56:00.528324  761851 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/411032.pem (1338 bytes)
	W1202 20:56:00.528374  761851 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-407427/.minikube/certs/411032_empty.pem, impossibly tiny 0 bytes
	I1202 20:56:00.528390  761851 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca-key.pem (1679 bytes)
	I1202 20:56:00.528423  761851 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca.pem (1082 bytes)
	I1202 20:56:00.528455  761851 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/cert.pem (1123 bytes)
	I1202 20:56:00.528493  761851 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/key.pem (1675 bytes)
	I1202 20:56:00.528552  761851 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/files/etc/ssl/certs/4110322.pem (1708 bytes)
	I1202 20:56:00.529432  761851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 20:56:00.554691  761851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1202 20:56:00.580499  761851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 20:56:00.606002  761851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1202 20:56:00.630389  761851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1202 20:56:00.655553  761851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1202 20:56:00.679419  761851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 20:56:00.704325  761851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1202 20:56:00.729255  761851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/files/etc/ssl/certs/4110322.pem --> /usr/share/ca-certificates/4110322.pem (1708 bytes)
	I1202 20:56:00.757910  761851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 20:56:00.782959  761851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/certs/411032.pem --> /usr/share/ca-certificates/411032.pem (1338 bytes)
	I1202 20:56:00.808564  761851 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 20:56:00.828291  761851 ssh_runner.go:195] Run: openssl version
	I1202 20:56:00.836796  761851 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4110322.pem && ln -fs /usr/share/ca-certificates/4110322.pem /etc/ssl/certs/4110322.pem"
	I1202 20:56:00.848469  761851 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4110322.pem
	I1202 20:56:00.853715  761851 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 20:13 /usr/share/ca-certificates/4110322.pem
	I1202 20:56:00.853790  761851 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4110322.pem
	I1202 20:56:00.905576  761851 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4110322.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 20:56:00.918463  761851 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 20:56:00.930339  761851 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 20:56:00.935452  761851 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 19:55 /usr/share/ca-certificates/minikubeCA.pem
	I1202 20:56:00.935522  761851 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 20:56:00.990051  761851 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 20:56:01.002960  761851 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/411032.pem && ln -fs /usr/share/ca-certificates/411032.pem /etc/ssl/certs/411032.pem"
	I1202 20:56:01.013994  761851 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/411032.pem
	I1202 20:56:01.019737  761851 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 20:13 /usr/share/ca-certificates/411032.pem
	I1202 20:56:01.019798  761851 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/411032.pem
	I1202 20:56:01.062700  761851 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/411032.pem /etc/ssl/certs/51391683.0"
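The <hash>.0 symlink names used above follow OpenSSL's subject-hash (c_rehash) convention; as a minimal sketch of the same steps done by hand, using the minikubeCA hash that appears in the log:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0       # trust-store lookup name is <hash>.0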
	I1202 20:56:01.074487  761851 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 20:56:01.079958  761851 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1202 20:56:01.080033  761851 kubeadm.go:401] StartCluster: {Name:embed-certs-386191 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-386191 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 20:56:01.080164  761851 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 20:56:01.080231  761851 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 20:56:01.119713  761851 cri.go:89] found id: ""
	I1202 20:56:01.122354  761851 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 20:56:01.160024  761851 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1202 20:56:01.174466  761851 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1202 20:56:01.174517  761851 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 20:56:01.186198  761851 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1202 20:56:01.186294  761851 kubeadm.go:158] found existing configuration files:
	
	I1202 20:56:01.186361  761851 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1202 20:56:01.201548  761851 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1202 20:56:01.201623  761851 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1202 20:56:01.214153  761851 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1202 20:56:01.225107  761851 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1202 20:56:01.225225  761851 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 20:56:01.236050  761851 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1202 20:56:01.247714  761851 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1202 20:56:01.247785  761851 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 20:56:01.259129  761851 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1202 20:56:01.270914  761851 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1202 20:56:01.270981  761851 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1202 20:56:01.283320  761851 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1202 20:56:01.344042  761851 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1202 20:56:01.344150  761851 kubeadm.go:319] [preflight] Running pre-flight checks
	I1202 20:56:01.374696  761851 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1202 20:56:01.374786  761851 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1202 20:56:01.374832  761851 kubeadm.go:319] OS: Linux
	I1202 20:56:01.374904  761851 kubeadm.go:319] CGROUPS_CPU: enabled
	I1202 20:56:01.374965  761851 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1202 20:56:01.375027  761851 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1202 20:56:01.375100  761851 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1202 20:56:01.375165  761851 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1202 20:56:01.375227  761851 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1202 20:56:01.375295  761851 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1202 20:56:01.375351  761851 kubeadm.go:319] CGROUPS_IO: enabled
	I1202 20:56:01.461671  761851 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1202 20:56:01.461847  761851 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1202 20:56:01.462101  761851 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1202 20:56:01.473475  761851 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1202 20:56:00.519234  759377 pod_ready.go:104] pod "coredns-66bc5c9577-jrln7" is not "Ready", error: <nil>
	W1202 20:56:03.019288  759377 pod_ready.go:104] pod "coredns-66bc5c9577-jrln7" is not "Ready", error: <nil>
	I1202 20:56:01.478718  761851 out.go:252]   - Generating certificates and keys ...
	I1202 20:56:01.478829  761851 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1202 20:56:01.478911  761851 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1202 20:56:01.668758  761851 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1202 20:56:01.829895  761851 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1202 20:56:02.005376  761851 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1202 20:56:02.862909  761851 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1202 20:56:03.307052  761851 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1202 20:56:03.307703  761851 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-386191 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1202 20:56:03.383959  761851 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1202 20:56:03.384496  761851 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-386191 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1202 20:56:03.508307  761851 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1202 20:56:04.670556  761851 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1202 20:56:04.823930  761851 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1202 20:56:04.824007  761851 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	W1202 20:56:00.979309  754876 pod_ready.go:104] pod "coredns-7d764666f9-ghxk6" is not "Ready", error: <nil>
	W1202 20:56:02.980313  754876 pod_ready.go:104] pod "coredns-7d764666f9-ghxk6" is not "Ready", error: <nil>
	W1202 20:56:05.478729  754876 pod_ready.go:104] pod "coredns-7d764666f9-ghxk6" is not "Ready", error: <nil>
	I1202 20:56:05.205466  761851 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1202 20:56:05.375427  761851 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1202 20:56:05.434193  761851 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1202 20:56:05.863197  761851 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1202 20:56:06.053990  761851 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1202 20:56:06.054504  761851 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1202 20:56:06.058651  761851 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1202 20:56:05.517785  759377 pod_ready.go:104] pod "coredns-66bc5c9577-jrln7" is not "Ready", error: <nil>
	W1202 20:56:07.518439  759377 pod_ready.go:104] pod "coredns-66bc5c9577-jrln7" is not "Ready", error: <nil>
	I1202 20:56:06.060126  761851 out.go:252]   - Booting up control plane ...
	I1202 20:56:06.060244  761851 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1202 20:56:06.060364  761851 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1202 20:56:06.061268  761851 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1202 20:56:06.095037  761851 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1202 20:56:06.095189  761851 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1202 20:56:06.102515  761851 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1202 20:56:06.102696  761851 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1202 20:56:06.102769  761851 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1202 20:56:06.205490  761851 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1202 20:56:06.205715  761851 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1202 20:56:07.205674  761851 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001810301s
	I1202 20:56:07.209848  761851 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1202 20:56:07.210052  761851 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1202 20:56:07.210217  761851 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1202 20:56:07.210338  761851 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1202 20:56:08.756010  761851 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.546069674s
	I1202 20:56:09.869674  761851 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.659323153s
	W1202 20:56:07.979740  754876 pod_ready.go:104] pod "coredns-7d764666f9-ghxk6" is not "Ready", error: <nil>
	W1202 20:56:10.478689  754876 pod_ready.go:104] pod "coredns-7d764666f9-ghxk6" is not "Ready", error: <nil>
	I1202 20:56:11.711917  761851 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.502061899s
	I1202 20:56:11.728157  761851 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1202 20:56:11.740906  761851 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1202 20:56:11.753231  761851 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1202 20:56:11.753530  761851 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-386191 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1202 20:56:11.764705  761851 kubeadm.go:319] [bootstrap-token] Using token: c8uju2.57r80hlp0isn29k2
	I1202 20:56:11.766183  761851 out.go:252]   - Configuring RBAC rules ...
	I1202 20:56:11.766339  761851 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1202 20:56:11.770506  761851 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1202 20:56:11.777525  761851 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1202 20:56:11.780772  761851 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1202 20:56:11.785459  761851 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1202 20:56:11.788963  761851 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1202 20:56:12.119080  761851 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1202 20:56:12.539952  761851 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1202 20:56:13.118875  761851 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1202 20:56:13.119856  761851 kubeadm.go:319] 
	I1202 20:56:13.119972  761851 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1202 20:56:13.119991  761851 kubeadm.go:319] 
	I1202 20:56:13.120096  761851 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1202 20:56:13.120106  761851 kubeadm.go:319] 
	I1202 20:56:13.120132  761851 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1202 20:56:13.120189  761851 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1202 20:56:13.120239  761851 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1202 20:56:13.120250  761851 kubeadm.go:319] 
	I1202 20:56:13.120296  761851 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1202 20:56:13.120303  761851 kubeadm.go:319] 
	I1202 20:56:13.120350  761851 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1202 20:56:13.120356  761851 kubeadm.go:319] 
	I1202 20:56:13.120405  761851 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1202 20:56:13.120480  761851 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1202 20:56:13.120550  761851 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1202 20:56:13.120559  761851 kubeadm.go:319] 
	I1202 20:56:13.120655  761851 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1202 20:56:13.120760  761851 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1202 20:56:13.120770  761851 kubeadm.go:319] 
	I1202 20:56:13.120947  761851 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token c8uju2.57r80hlp0isn29k2 \
	I1202 20:56:13.121116  761851 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:48349ffa2369b87cd15480ece56edb1c5c5d97a828930f9dfbf1d1ceccf80ff4 \
	I1202 20:56:13.121150  761851 kubeadm.go:319] 	--control-plane 
	I1202 20:56:13.121158  761851 kubeadm.go:319] 
	I1202 20:56:13.121277  761851 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1202 20:56:13.121292  761851 kubeadm.go:319] 
	I1202 20:56:13.121403  761851 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token c8uju2.57r80hlp0isn29k2 \
	I1202 20:56:13.121546  761851 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:48349ffa2369b87cd15480ece56edb1c5c5d97a828930f9dfbf1d1ceccf80ff4 
	I1202 20:56:13.124563  761851 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1202 20:56:13.124664  761851 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
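The --discovery-token-ca-cert-hash printed in the join commands above can be recomputed from the cluster CA; a sketch using the standard openssl pipeline from the kubeadm documentation, assuming the certificateDir shown earlier in the log (/var/lib/minikube/certs):

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'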
	I1202 20:56:13.124688  761851 cni.go:84] Creating CNI manager for ""
	I1202 20:56:13.124700  761851 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 20:56:13.126500  761851 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1202 20:56:10.017702  759377 pod_ready.go:104] pod "coredns-66bc5c9577-jrln7" is not "Ready", error: <nil>
	W1202 20:56:12.018270  759377 pod_ready.go:104] pod "coredns-66bc5c9577-jrln7" is not "Ready", error: <nil>
	I1202 20:56:13.128206  761851 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1202 20:56:13.133011  761851 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1202 20:56:13.133036  761851 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1202 20:56:13.147210  761851 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
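Once the CNI manifest is applied, the resulting kindnet workload can be checked with kubectl; the DaemonSet name and label below are assumptions inferred from the kindnet-* pod that appears later in the log, so adjust them if the manifest differs:

	kubectl -n kube-system get daemonset kindnet
	kubectl -n kube-system get pods -l app=kindnet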
	I1202 20:56:13.367880  761851 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1202 20:56:13.368008  761851 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 20:56:13.368037  761851 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-386191 minikube.k8s.io/updated_at=2025_12_02T20_56_13_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=c0dec3f81fa1d67ed4ee425e0873d0aa009f9b92 minikube.k8s.io/name=embed-certs-386191 minikube.k8s.io/primary=true
	I1202 20:56:13.378170  761851 ops.go:34] apiserver oom_adj: -16
	I1202 20:56:13.456213  761851 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 20:56:13.956791  761851 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 20:56:14.456911  761851 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 20:56:14.957002  761851 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1202 20:56:12.481885  754876 pod_ready.go:104] pod "coredns-7d764666f9-ghxk6" is not "Ready", error: <nil>
	I1202 20:56:14.478647  754876 pod_ready.go:94] pod "coredns-7d764666f9-ghxk6" is "Ready"
	I1202 20:56:14.478679  754876 pod_ready.go:86] duration metric: took 33.50633852s for pod "coredns-7d764666f9-ghxk6" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:14.481510  754876 pod_ready.go:83] waiting for pod "etcd-no-preload-336331" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:14.487252  754876 pod_ready.go:94] pod "etcd-no-preload-336331" is "Ready"
	I1202 20:56:14.487284  754876 pod_ready.go:86] duration metric: took 5.742661ms for pod "etcd-no-preload-336331" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:14.489709  754876 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-336331" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:14.493975  754876 pod_ready.go:94] pod "kube-apiserver-no-preload-336331" is "Ready"
	I1202 20:56:14.494030  754876 pod_ready.go:86] duration metric: took 4.293005ms for pod "kube-apiserver-no-preload-336331" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:14.496555  754876 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-336331" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:14.676017  754876 pod_ready.go:94] pod "kube-controller-manager-no-preload-336331" is "Ready"
	I1202 20:56:14.676054  754876 pod_ready.go:86] duration metric: took 179.468852ms for pod "kube-controller-manager-no-preload-336331" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:14.876507  754876 pod_ready.go:83] waiting for pod "kube-proxy-qc2v9" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:15.276156  754876 pod_ready.go:94] pod "kube-proxy-qc2v9" is "Ready"
	I1202 20:56:15.276184  754876 pod_ready.go:86] duration metric: took 399.652639ms for pod "kube-proxy-qc2v9" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:15.476929  754876 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-336331" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:15.876785  754876 pod_ready.go:94] pod "kube-scheduler-no-preload-336331" is "Ready"
	I1202 20:56:15.876821  754876 pod_ready.go:86] duration metric: took 399.859554ms for pod "kube-scheduler-no-preload-336331" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:15.876837  754876 pod_ready.go:40] duration metric: took 34.909444308s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1202 20:56:15.923408  754876 start.go:625] kubectl: 1.34.2, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1202 20:56:15.925124  754876 out.go:179] * Done! kubectl is now configured to use "no-preload-336331" cluster and "default" namespace by default
	I1202 20:56:15.457186  761851 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 20:56:15.957341  761851 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 20:56:16.456356  761851 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 20:56:16.956786  761851 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 20:56:17.457273  761851 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 20:56:17.529683  761851 kubeadm.go:1114] duration metric: took 4.161789754s to wait for elevateKubeSystemPrivileges
	I1202 20:56:17.529733  761851 kubeadm.go:403] duration metric: took 16.449707403s to StartCluster
	I1202 20:56:17.529758  761851 settings.go:142] acquiring lock: {Name:mk79858205e0c110b31d2645b49097e349ff37c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:56:17.529828  761851 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-407427/kubeconfig
	I1202 20:56:17.531386  761851 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-407427/kubeconfig: {Name:mk71769b01652992a8aaa239aae4f5c38b3500e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:56:17.531613  761851 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1202 20:56:17.531617  761851 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 20:56:17.531699  761851 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1202 20:56:17.531801  761851 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-386191"
	I1202 20:56:17.531817  761851 config.go:182] Loaded profile config "embed-certs-386191": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 20:56:17.531839  761851 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-386191"
	I1202 20:56:17.531817  761851 addons.go:70] Setting default-storageclass=true in profile "embed-certs-386191"
	I1202 20:56:17.531877  761851 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-386191"
	I1202 20:56:17.531882  761851 host.go:66] Checking if "embed-certs-386191" exists ...
	I1202 20:56:17.532342  761851 cli_runner.go:164] Run: docker container inspect embed-certs-386191 --format={{.State.Status}}
	I1202 20:56:17.532507  761851 cli_runner.go:164] Run: docker container inspect embed-certs-386191 --format={{.State.Status}}
	I1202 20:56:17.534531  761851 out.go:179] * Verifying Kubernetes components...
	I1202 20:56:17.535950  761851 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 20:56:17.558800  761851 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 20:56:17.560025  761851 addons.go:239] Setting addon default-storageclass=true in "embed-certs-386191"
	I1202 20:56:17.560084  761851 host.go:66] Checking if "embed-certs-386191" exists ...
	I1202 20:56:17.560580  761851 cli_runner.go:164] Run: docker container inspect embed-certs-386191 --format={{.State.Status}}
	I1202 20:56:17.561225  761851 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 20:56:17.561246  761851 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1202 20:56:17.561324  761851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-386191
	I1202 20:56:17.590711  761851 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33513 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/embed-certs-386191/id_rsa Username:docker}
	I1202 20:56:17.592956  761851 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1202 20:56:17.592992  761851 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1202 20:56:17.593051  761851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-386191
	I1202 20:56:17.617931  761851 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33513 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/embed-certs-386191/id_rsa Username:docker}
	I1202 20:56:17.638614  761851 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1202 20:56:17.681673  761851 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 20:56:17.712144  761851 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 20:56:17.735866  761851 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1202 20:56:17.815035  761851 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
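The sed pipeline a few lines above rewrites the CoreDNS Corefile so that the following hosts block (plus a log directive) sits ahead of the forward plugin, which is what lets pods resolve host.minikube.internal to the host gateway:

	hosts {
	   192.168.103.1 host.minikube.internal
	   fallthrough
	}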
	I1202 20:56:17.816483  761851 node_ready.go:35] waiting up to 6m0s for node "embed-certs-386191" to be "Ready" ...
	I1202 20:56:18.003767  761851 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1202 20:56:14.018515  759377 pod_ready.go:104] pod "coredns-66bc5c9577-jrln7" is not "Ready", error: <nil>
	W1202 20:56:16.020009  759377 pod_ready.go:104] pod "coredns-66bc5c9577-jrln7" is not "Ready", error: <nil>
	W1202 20:56:18.517905  759377 pod_ready.go:104] pod "coredns-66bc5c9577-jrln7" is not "Ready", error: <nil>
	I1202 20:56:18.004793  761851 addons.go:530] duration metric: took 473.08842ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1202 20:56:18.319554  761851 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-386191" context rescaled to 1 replicas
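The rescale above is roughly equivalent to running the following against the new cluster:

	kubectl -n kube-system scale deployment coredns --replicas=1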
	W1202 20:56:19.820111  761851 node_ready.go:57] node "embed-certs-386191" has "Ready":"False" status (will retry)
	W1202 20:56:21.019501  759377 pod_ready.go:104] pod "coredns-66bc5c9577-jrln7" is not "Ready", error: <nil>
	W1202 20:56:23.518373  759377 pod_ready.go:104] pod "coredns-66bc5c9577-jrln7" is not "Ready", error: <nil>
	W1202 20:56:22.320036  761851 node_ready.go:57] node "embed-certs-386191" has "Ready":"False" status (will retry)
	W1202 20:56:24.320559  761851 node_ready.go:57] node "embed-certs-386191" has "Ready":"False" status (will retry)
	W1202 20:56:26.018767  759377 pod_ready.go:104] pod "coredns-66bc5c9577-jrln7" is not "Ready", error: <nil>
	W1202 20:56:28.019223  759377 pod_ready.go:104] pod "coredns-66bc5c9577-jrln7" is not "Ready", error: <nil>
	W1202 20:56:26.320730  761851 node_ready.go:57] node "embed-certs-386191" has "Ready":"False" status (will retry)
	W1202 20:56:28.820145  761851 node_ready.go:57] node "embed-certs-386191" has "Ready":"False" status (will retry)
	W1202 20:56:30.519140  759377 pod_ready.go:104] pod "coredns-66bc5c9577-jrln7" is not "Ready", error: <nil>
	I1202 20:56:32.019528  759377 pod_ready.go:94] pod "coredns-66bc5c9577-jrln7" is "Ready"
	I1202 20:56:32.019562  759377 pod_ready.go:86] duration metric: took 35.507552593s for pod "coredns-66bc5c9577-jrln7" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:32.022973  759377 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-997805" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:32.027973  759377 pod_ready.go:94] pod "etcd-default-k8s-diff-port-997805" is "Ready"
	I1202 20:56:32.028009  759377 pod_ready.go:86] duration metric: took 5.002878ms for pod "etcd-default-k8s-diff-port-997805" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:32.030436  759377 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-997805" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:32.035486  759377 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-997805" is "Ready"
	I1202 20:56:32.035517  759377 pod_ready.go:86] duration metric: took 5.054721ms for pod "kube-apiserver-default-k8s-diff-port-997805" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:32.038168  759377 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-997805" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:32.216544  759377 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-997805" is "Ready"
	I1202 20:56:32.216573  759377 pod_ready.go:86] duration metric: took 178.377154ms for pod "kube-controller-manager-default-k8s-diff-port-997805" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:32.417009  759377 pod_ready.go:83] waiting for pod "kube-proxy-s2jpn" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:32.816568  759377 pod_ready.go:94] pod "kube-proxy-s2jpn" is "Ready"
	I1202 20:56:32.816591  759377 pod_ready.go:86] duration metric: took 399.551658ms for pod "kube-proxy-s2jpn" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:33.016734  759377 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-997805" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:33.415885  759377 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-997805" is "Ready"
	I1202 20:56:33.415912  759377 pod_ready.go:86] duration metric: took 399.150299ms for pod "kube-scheduler-default-k8s-diff-port-997805" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:33.415928  759377 pod_ready.go:40] duration metric: took 36.908377916s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1202 20:56:33.462852  759377 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1202 20:56:33.464589  759377 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-997805" cluster and "default" namespace by default
	I1202 20:56:30.319943  761851 node_ready.go:49] node "embed-certs-386191" is "Ready"
	I1202 20:56:30.319978  761851 node_ready.go:38] duration metric: took 12.503459453s for node "embed-certs-386191" to be "Ready" ...
	I1202 20:56:30.319996  761851 api_server.go:52] waiting for apiserver process to appear ...
	I1202 20:56:30.320050  761851 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:56:30.333122  761851 api_server.go:72] duration metric: took 12.801460339s to wait for apiserver process to appear ...
	I1202 20:56:30.333155  761851 api_server.go:88] waiting for apiserver healthz status ...
	I1202 20:56:30.333181  761851 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1202 20:56:30.338949  761851 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1202 20:56:30.340352  761851 api_server.go:141] control plane version: v1.34.2
	I1202 20:56:30.340387  761851 api_server.go:131] duration metric: took 7.223849ms to wait for apiserver health ...
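The same health probe can be reproduced from the host; a sketch, skipping TLS verification since the profile's CA is not in the host trust store:

	curl -k https://192.168.103.2:8443/healthz    # prints "ok" when the API server is healthy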
	I1202 20:56:30.340400  761851 system_pods.go:43] waiting for kube-system pods to appear ...
	I1202 20:56:30.345084  761851 system_pods.go:59] 8 kube-system pods found
	I1202 20:56:30.345142  761851 system_pods.go:61] "coredns-66bc5c9577-q6l9x" [e7159eb1-3cde-437a-99e3-760c9c397977] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 20:56:30.345152  761851 system_pods.go:61] "etcd-embed-certs-386191" [5ca26226-7c69-4d6b-a513-2187c089d96c] Running
	I1202 20:56:30.345160  761851 system_pods.go:61] "kindnet-x9jsh" [410369de-877d-46e5-8f7c-cd8076d1d2f5] Running
	I1202 20:56:30.345166  761851 system_pods.go:61] "kube-apiserver-embed-certs-386191" [dd2a96bf-6afe-43b8-b01d-a88496c7c9b9] Running
	I1202 20:56:30.345173  761851 system_pods.go:61] "kube-controller-manager-embed-certs-386191" [ba51ea44-285e-494e-a173-75f314cb6a5e] Running
	I1202 20:56:30.345178  761851 system_pods.go:61] "kube-proxy-854r8" [6c9652b0-217c-466f-9345-7364f0e39936] Running
	I1202 20:56:30.345185  761851 system_pods.go:61] "kube-scheduler-embed-certs-386191" [abe313cb-061b-4d65-b0e0-f369aacdc1c4] Running
	I1202 20:56:30.345195  761851 system_pods.go:61] "storage-provisioner" [d37e55bb-bb1f-4659-a9c5-14d47011bd23] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 20:56:30.345205  761851 system_pods.go:74] duration metric: took 4.796405ms to wait for pod list to return data ...
	I1202 20:56:30.345227  761851 default_sa.go:34] waiting for default service account to be created ...
	I1202 20:56:30.348608  761851 default_sa.go:45] found service account: "default"
	I1202 20:56:30.348639  761851 default_sa.go:55] duration metric: took 3.40167ms for default service account to be created ...
	I1202 20:56:30.348652  761851 system_pods.go:116] waiting for k8s-apps to be running ...
	I1202 20:56:30.352973  761851 system_pods.go:86] 8 kube-system pods found
	I1202 20:56:30.353004  761851 system_pods.go:89] "coredns-66bc5c9577-q6l9x" [e7159eb1-3cde-437a-99e3-760c9c397977] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 20:56:30.353011  761851 system_pods.go:89] "etcd-embed-certs-386191" [5ca26226-7c69-4d6b-a513-2187c089d96c] Running
	I1202 20:56:30.353017  761851 system_pods.go:89] "kindnet-x9jsh" [410369de-877d-46e5-8f7c-cd8076d1d2f5] Running
	I1202 20:56:30.353021  761851 system_pods.go:89] "kube-apiserver-embed-certs-386191" [dd2a96bf-6afe-43b8-b01d-a88496c7c9b9] Running
	I1202 20:56:30.353025  761851 system_pods.go:89] "kube-controller-manager-embed-certs-386191" [ba51ea44-285e-494e-a173-75f314cb6a5e] Running
	I1202 20:56:30.353028  761851 system_pods.go:89] "kube-proxy-854r8" [6c9652b0-217c-466f-9345-7364f0e39936] Running
	I1202 20:56:30.353031  761851 system_pods.go:89] "kube-scheduler-embed-certs-386191" [abe313cb-061b-4d65-b0e0-f369aacdc1c4] Running
	I1202 20:56:30.353036  761851 system_pods.go:89] "storage-provisioner" [d37e55bb-bb1f-4659-a9c5-14d47011bd23] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 20:56:30.353064  761851 retry.go:31] will retry after 268.066085ms: missing components: kube-dns
	I1202 20:56:30.626568  761851 system_pods.go:86] 8 kube-system pods found
	I1202 20:56:30.626621  761851 system_pods.go:89] "coredns-66bc5c9577-q6l9x" [e7159eb1-3cde-437a-99e3-760c9c397977] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 20:56:30.626630  761851 system_pods.go:89] "etcd-embed-certs-386191" [5ca26226-7c69-4d6b-a513-2187c089d96c] Running
	I1202 20:56:30.626639  761851 system_pods.go:89] "kindnet-x9jsh" [410369de-877d-46e5-8f7c-cd8076d1d2f5] Running
	I1202 20:56:30.626645  761851 system_pods.go:89] "kube-apiserver-embed-certs-386191" [dd2a96bf-6afe-43b8-b01d-a88496c7c9b9] Running
	I1202 20:56:30.626656  761851 system_pods.go:89] "kube-controller-manager-embed-certs-386191" [ba51ea44-285e-494e-a173-75f314cb6a5e] Running
	I1202 20:56:30.626662  761851 system_pods.go:89] "kube-proxy-854r8" [6c9652b0-217c-466f-9345-7364f0e39936] Running
	I1202 20:56:30.626675  761851 system_pods.go:89] "kube-scheduler-embed-certs-386191" [abe313cb-061b-4d65-b0e0-f369aacdc1c4] Running
	I1202 20:56:30.626687  761851 system_pods.go:89] "storage-provisioner" [d37e55bb-bb1f-4659-a9c5-14d47011bd23] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 20:56:30.626708  761851 retry.go:31] will retry after 295.685816ms: missing components: kube-dns
	I1202 20:56:30.926543  761851 system_pods.go:86] 8 kube-system pods found
	I1202 20:56:30.926598  761851 system_pods.go:89] "coredns-66bc5c9577-q6l9x" [e7159eb1-3cde-437a-99e3-760c9c397977] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 20:56:30.926608  761851 system_pods.go:89] "etcd-embed-certs-386191" [5ca26226-7c69-4d6b-a513-2187c089d96c] Running
	I1202 20:56:30.926615  761851 system_pods.go:89] "kindnet-x9jsh" [410369de-877d-46e5-8f7c-cd8076d1d2f5] Running
	I1202 20:56:30.926621  761851 system_pods.go:89] "kube-apiserver-embed-certs-386191" [dd2a96bf-6afe-43b8-b01d-a88496c7c9b9] Running
	I1202 20:56:30.926628  761851 system_pods.go:89] "kube-controller-manager-embed-certs-386191" [ba51ea44-285e-494e-a173-75f314cb6a5e] Running
	I1202 20:56:30.926634  761851 system_pods.go:89] "kube-proxy-854r8" [6c9652b0-217c-466f-9345-7364f0e39936] Running
	I1202 20:56:30.926639  761851 system_pods.go:89] "kube-scheduler-embed-certs-386191" [abe313cb-061b-4d65-b0e0-f369aacdc1c4] Running
	I1202 20:56:30.926646  761851 system_pods.go:89] "storage-provisioner" [d37e55bb-bb1f-4659-a9c5-14d47011bd23] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 20:56:30.926671  761851 retry.go:31] will retry after 481.864787ms: missing components: kube-dns
	I1202 20:56:31.413061  761851 system_pods.go:86] 8 kube-system pods found
	I1202 20:56:31.413118  761851 system_pods.go:89] "coredns-66bc5c9577-q6l9x" [e7159eb1-3cde-437a-99e3-760c9c397977] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 20:56:31.413126  761851 system_pods.go:89] "etcd-embed-certs-386191" [5ca26226-7c69-4d6b-a513-2187c089d96c] Running
	I1202 20:56:31.413131  761851 system_pods.go:89] "kindnet-x9jsh" [410369de-877d-46e5-8f7c-cd8076d1d2f5] Running
	I1202 20:56:31.413134  761851 system_pods.go:89] "kube-apiserver-embed-certs-386191" [dd2a96bf-6afe-43b8-b01d-a88496c7c9b9] Running
	I1202 20:56:31.413141  761851 system_pods.go:89] "kube-controller-manager-embed-certs-386191" [ba51ea44-285e-494e-a173-75f314cb6a5e] Running
	I1202 20:56:31.413146  761851 system_pods.go:89] "kube-proxy-854r8" [6c9652b0-217c-466f-9345-7364f0e39936] Running
	I1202 20:56:31.413151  761851 system_pods.go:89] "kube-scheduler-embed-certs-386191" [abe313cb-061b-4d65-b0e0-f369aacdc1c4] Running
	I1202 20:56:31.413158  761851 system_pods.go:89] "storage-provisioner" [d37e55bb-bb1f-4659-a9c5-14d47011bd23] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 20:56:31.413178  761851 retry.go:31] will retry after 524.282357ms: missing components: kube-dns
	I1202 20:56:31.942153  761851 system_pods.go:86] 8 kube-system pods found
	I1202 20:56:31.942180  761851 system_pods.go:89] "coredns-66bc5c9577-q6l9x" [e7159eb1-3cde-437a-99e3-760c9c397977] Running
	I1202 20:56:31.942185  761851 system_pods.go:89] "etcd-embed-certs-386191" [5ca26226-7c69-4d6b-a513-2187c089d96c] Running
	I1202 20:56:31.942189  761851 system_pods.go:89] "kindnet-x9jsh" [410369de-877d-46e5-8f7c-cd8076d1d2f5] Running
	I1202 20:56:31.942192  761851 system_pods.go:89] "kube-apiserver-embed-certs-386191" [dd2a96bf-6afe-43b8-b01d-a88496c7c9b9] Running
	I1202 20:56:31.942196  761851 system_pods.go:89] "kube-controller-manager-embed-certs-386191" [ba51ea44-285e-494e-a173-75f314cb6a5e] Running
	I1202 20:56:31.942199  761851 system_pods.go:89] "kube-proxy-854r8" [6c9652b0-217c-466f-9345-7364f0e39936] Running
	I1202 20:56:31.942202  761851 system_pods.go:89] "kube-scheduler-embed-certs-386191" [abe313cb-061b-4d65-b0e0-f369aacdc1c4] Running
	I1202 20:56:31.942205  761851 system_pods.go:89] "storage-provisioner" [d37e55bb-bb1f-4659-a9c5-14d47011bd23] Running
	I1202 20:56:31.942212  761851 system_pods.go:126] duration metric: took 1.593529924s to wait for k8s-apps to be running ...
	I1202 20:56:31.942219  761851 system_svc.go:44] waiting for kubelet service to be running ....
	I1202 20:56:31.942261  761851 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 20:56:31.955055  761851 system_svc.go:56] duration metric: took 12.827769ms WaitForService to wait for kubelet
	I1202 20:56:31.955097  761851 kubeadm.go:587] duration metric: took 14.423443169s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 20:56:31.955121  761851 node_conditions.go:102] verifying NodePressure condition ...
	I1202 20:56:31.958210  761851 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1202 20:56:31.958249  761851 node_conditions.go:123] node cpu capacity is 8
	I1202 20:56:31.958265  761851 node_conditions.go:105] duration metric: took 3.138976ms to run NodePressure ...
	I1202 20:56:31.958278  761851 start.go:242] waiting for startup goroutines ...
	I1202 20:56:31.958285  761851 start.go:247] waiting for cluster config update ...
	I1202 20:56:31.958296  761851 start.go:256] writing updated cluster config ...
	I1202 20:56:31.958597  761851 ssh_runner.go:195] Run: rm -f paused
	I1202 20:56:31.962581  761851 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1202 20:56:31.966130  761851 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-q6l9x" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:31.971173  761851 pod_ready.go:94] pod "coredns-66bc5c9577-q6l9x" is "Ready"
	I1202 20:56:31.971201  761851 pod_ready.go:86] duration metric: took 5.04828ms for pod "coredns-66bc5c9577-q6l9x" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:31.973411  761851 pod_ready.go:83] waiting for pod "etcd-embed-certs-386191" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:31.978228  761851 pod_ready.go:94] pod "etcd-embed-certs-386191" is "Ready"
	I1202 20:56:31.978263  761851 pod_ready.go:86] duration metric: took 4.826356ms for pod "etcd-embed-certs-386191" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:31.980684  761851 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-386191" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:31.984771  761851 pod_ready.go:94] pod "kube-apiserver-embed-certs-386191" is "Ready"
	I1202 20:56:31.984803  761851 pod_ready.go:86] duration metric: took 4.09504ms for pod "kube-apiserver-embed-certs-386191" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:31.986878  761851 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-386191" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:32.367606  761851 pod_ready.go:94] pod "kube-controller-manager-embed-certs-386191" is "Ready"
	I1202 20:56:32.367637  761851 pod_ready.go:86] duration metric: took 380.737416ms for pod "kube-controller-manager-embed-certs-386191" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:32.567519  761851 pod_ready.go:83] waiting for pod "kube-proxy-854r8" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:32.967144  761851 pod_ready.go:94] pod "kube-proxy-854r8" is "Ready"
	I1202 20:56:32.967177  761851 pod_ready.go:86] duration metric: took 399.625971ms for pod "kube-proxy-854r8" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:33.168115  761851 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-386191" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:33.566983  761851 pod_ready.go:94] pod "kube-scheduler-embed-certs-386191" is "Ready"
	I1202 20:56:33.567015  761851 pod_ready.go:86] duration metric: took 398.86856ms for pod "kube-scheduler-embed-certs-386191" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:33.567030  761851 pod_ready.go:40] duration metric: took 1.604412945s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1202 20:56:33.625323  761851 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1202 20:56:33.627128  761851 out.go:179] * Done! kubectl is now configured to use "embed-certs-386191" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 02 20:56:16 default-k8s-diff-port-997805 crio[565]: time="2025-12-02T20:56:16.720883621Z" level=info msg="Started container" PID=1755 containerID=8b9571fc1afb59ffda70959998b9386a8cc1a412c773117671bd059b0c151419 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vhp59/dashboard-metrics-scraper id=7068fee1-e2f8-4bed-b392-9e04e9b48792 name=/runtime.v1.RuntimeService/StartContainer sandboxID=82b8464121953a993bc43eb6fe67912f54b3283ad0ce74e3a1bd67f67c091d49
	Dec 02 20:56:16 default-k8s-diff-port-997805 crio[565]: time="2025-12-02T20:56:16.860332035Z" level=info msg="Removing container: 200b961bc8b01d2d50a50e095ea2056aa5e2e23febb2edfacc81d4ddfb956fc0" id=c7642385-e74a-4a35-be4b-a35c75aad6a1 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 02 20:56:16 default-k8s-diff-port-997805 crio[565]: time="2025-12-02T20:56:16.87428434Z" level=info msg="Removed container 200b961bc8b01d2d50a50e095ea2056aa5e2e23febb2edfacc81d4ddfb956fc0: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vhp59/dashboard-metrics-scraper" id=c7642385-e74a-4a35-be4b-a35c75aad6a1 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 02 20:56:25 default-k8s-diff-port-997805 crio[565]: time="2025-12-02T20:56:25.885227609Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=41ead202-ee56-4fee-b2c6-a899c09bc22c name=/runtime.v1.ImageService/ImageStatus
	Dec 02 20:56:25 default-k8s-diff-port-997805 crio[565]: time="2025-12-02T20:56:25.886279227Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=b0431df3-3ffe-44fc-b59e-ab034a5e82cb name=/runtime.v1.ImageService/ImageStatus
	Dec 02 20:56:25 default-k8s-diff-port-997805 crio[565]: time="2025-12-02T20:56:25.887367804Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=c102ae40-1aee-41e3-a464-6dfdcd001b40 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 20:56:25 default-k8s-diff-port-997805 crio[565]: time="2025-12-02T20:56:25.887527059Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 20:56:25 default-k8s-diff-port-997805 crio[565]: time="2025-12-02T20:56:25.892606016Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 20:56:25 default-k8s-diff-port-997805 crio[565]: time="2025-12-02T20:56:25.892825228Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/d561facd3bc5b213f169448a0b25db351a0e272a62053c61991d04124aa2333b/merged/etc/passwd: no such file or directory"
	Dec 02 20:56:25 default-k8s-diff-port-997805 crio[565]: time="2025-12-02T20:56:25.892854415Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/d561facd3bc5b213f169448a0b25db351a0e272a62053c61991d04124aa2333b/merged/etc/group: no such file or directory"
	Dec 02 20:56:25 default-k8s-diff-port-997805 crio[565]: time="2025-12-02T20:56:25.893795321Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 20:56:25 default-k8s-diff-port-997805 crio[565]: time="2025-12-02T20:56:25.930114881Z" level=info msg="Created container 35e720802a1bf3bbed62adc89a0f19dce7a67de2db637573eb1894ab9ebb8f24: kube-system/storage-provisioner/storage-provisioner" id=c102ae40-1aee-41e3-a464-6dfdcd001b40 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 20:56:25 default-k8s-diff-port-997805 crio[565]: time="2025-12-02T20:56:25.930756886Z" level=info msg="Starting container: 35e720802a1bf3bbed62adc89a0f19dce7a67de2db637573eb1894ab9ebb8f24" id=edbb469e-ee1b-441b-9c79-1b0f4f4df2e7 name=/runtime.v1.RuntimeService/StartContainer
	Dec 02 20:56:25 default-k8s-diff-port-997805 crio[565]: time="2025-12-02T20:56:25.93256201Z" level=info msg="Started container" PID=1772 containerID=35e720802a1bf3bbed62adc89a0f19dce7a67de2db637573eb1894ab9ebb8f24 description=kube-system/storage-provisioner/storage-provisioner id=edbb469e-ee1b-441b-9c79-1b0f4f4df2e7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=eba18f9d9797bf1e231fb0774d0cc55e6bc3bc97ed16f2daa02c5add6153e22d
	Dec 02 20:56:36 default-k8s-diff-port-997805 crio[565]: time="2025-12-02T20:56:36.753882942Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=29d64113-d741-484e-ae44-a0f1e042da40 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 20:56:36 default-k8s-diff-port-997805 crio[565]: time="2025-12-02T20:56:36.754897124Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=dfbd5b69-be85-44be-838e-6618a9d7728a name=/runtime.v1.ImageService/ImageStatus
	Dec 02 20:56:36 default-k8s-diff-port-997805 crio[565]: time="2025-12-02T20:56:36.756127139Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vhp59/dashboard-metrics-scraper" id=70be0df9-c03b-45ff-be1c-f58e610a608d name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 20:56:36 default-k8s-diff-port-997805 crio[565]: time="2025-12-02T20:56:36.756265122Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 20:56:36 default-k8s-diff-port-997805 crio[565]: time="2025-12-02T20:56:36.765090609Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 20:56:36 default-k8s-diff-port-997805 crio[565]: time="2025-12-02T20:56:36.768588064Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 20:56:36 default-k8s-diff-port-997805 crio[565]: time="2025-12-02T20:56:36.802729213Z" level=info msg="Created container c9080db2b6daf76ef63b2b59e74d0239edbb838d08547298dd4502c7c3b4d9f4: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vhp59/dashboard-metrics-scraper" id=70be0df9-c03b-45ff-be1c-f58e610a608d name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 20:56:36 default-k8s-diff-port-997805 crio[565]: time="2025-12-02T20:56:36.803662467Z" level=info msg="Starting container: c9080db2b6daf76ef63b2b59e74d0239edbb838d08547298dd4502c7c3b4d9f4" id=c866f162-c4f5-41df-8d94-e865500f2435 name=/runtime.v1.RuntimeService/StartContainer
	Dec 02 20:56:36 default-k8s-diff-port-997805 crio[565]: time="2025-12-02T20:56:36.806157915Z" level=info msg="Started container" PID=1809 containerID=c9080db2b6daf76ef63b2b59e74d0239edbb838d08547298dd4502c7c3b4d9f4 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vhp59/dashboard-metrics-scraper id=c866f162-c4f5-41df-8d94-e865500f2435 name=/runtime.v1.RuntimeService/StartContainer sandboxID=82b8464121953a993bc43eb6fe67912f54b3283ad0ce74e3a1bd67f67c091d49
	Dec 02 20:56:36 default-k8s-diff-port-997805 crio[565]: time="2025-12-02T20:56:36.91806396Z" level=info msg="Removing container: 8b9571fc1afb59ffda70959998b9386a8cc1a412c773117671bd059b0c151419" id=b8378f1d-1888-4295-b94a-10e6938c2590 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 02 20:56:36 default-k8s-diff-port-997805 crio[565]: time="2025-12-02T20:56:36.928175406Z" level=info msg="Removed container 8b9571fc1afb59ffda70959998b9386a8cc1a412c773117671bd059b0c151419: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vhp59/dashboard-metrics-scraper" id=b8378f1d-1888-4295-b94a-10e6938c2590 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	c9080db2b6daf       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           11 seconds ago      Exited              dashboard-metrics-scraper   3                   82b8464121953       dashboard-metrics-scraper-6ffb444bf9-vhp59             kubernetes-dashboard
	35e720802a1bf       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           22 seconds ago      Running             storage-provisioner         1                   eba18f9d9797b       storage-provisioner                                    kube-system
	f7c1779df921d       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   46 seconds ago      Running             kubernetes-dashboard        0                   26e44926de7f4       kubernetes-dashboard-855c9754f9-jz8xk                  kubernetes-dashboard
	08d5150fce081       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           52 seconds ago      Running             busybox                     1                   c44964ac4c8c3       busybox                                                default
	f06d54a2384df       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           52 seconds ago      Running             kindnet-cni                 0                   0b1607b008992       kindnet-rzqpn                                          kube-system
	1e15bb4007b6f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           52 seconds ago      Exited              storage-provisioner         0                   eba18f9d9797b       storage-provisioner                                    kube-system
	5ad0a1655ba23       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                           53 seconds ago      Running             kube-proxy                  0                   f11c81c57060d       kube-proxy-s2jpn                                       kube-system
	fc477a72b7656       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           53 seconds ago      Running             coredns                     0                   1745c8b86e040       coredns-66bc5c9577-jrln7                               kube-system
	25e14e8feafb6       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           56 seconds ago      Running             etcd                        0                   97509908f5a98       etcd-default-k8s-diff-port-997805                      kube-system
	0c7e2844e2dbd       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                           56 seconds ago      Running             kube-scheduler              0                   4d0207ec1741b       kube-scheduler-default-k8s-diff-port-997805            kube-system
	81b0ec87511a0       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                           56 seconds ago      Running             kube-apiserver              0                   f5fdfcd5991e8       kube-apiserver-default-k8s-diff-port-997805            kube-system
	e13e6c4d6c5da       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                           56 seconds ago      Running             kube-controller-manager     0                   13288f31fdebc       kube-controller-manager-default-k8s-diff-port-997805   kube-system
	
	
	==> coredns [fc477a72b765693b81689208ff42b491035d31c49ea6b43c64099d495e7cec00] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:45135 - 2690 "HINFO IN 5587080186042255362.1680565545141175739. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.026839424s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-997805
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-997805
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c0dec3f81fa1d67ed4ee425e0873d0aa009f9b92
	                    minikube.k8s.io/name=default-k8s-diff-port-997805
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_02T20_54_57_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 02 Dec 2025 20:54:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-997805
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 02 Dec 2025 20:56:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 02 Dec 2025 20:56:25 +0000   Tue, 02 Dec 2025 20:54:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 02 Dec 2025 20:56:25 +0000   Tue, 02 Dec 2025 20:54:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 02 Dec 2025 20:56:25 +0000   Tue, 02 Dec 2025 20:54:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 02 Dec 2025 20:56:25 +0000   Tue, 02 Dec 2025 20:55:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-997805
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 c31a325af81b969158c21fa769271857
	  System UUID:                4d0fe763-c364-4b9d-a9b2-5ea428409eed
	  Boot ID:                    9dd5456f-e394-4b4b-9458-48d26faf507e
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 coredns-66bc5c9577-jrln7                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     107s
	  kube-system                 etcd-default-k8s-diff-port-997805                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         114s
	  kube-system                 kindnet-rzqpn                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      107s
	  kube-system                 kube-apiserver-default-k8s-diff-port-997805             250m (3%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-997805    200m (2%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-proxy-s2jpn                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-scheduler-default-k8s-diff-port-997805             100m (1%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-vhp59              0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-jz8xk                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 106s               kube-proxy       
	  Normal  Starting                 52s                kube-proxy       
	  Normal  NodeHasSufficientMemory  112s               kubelet          Node default-k8s-diff-port-997805 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    112s               kubelet          Node default-k8s-diff-port-997805 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     112s               kubelet          Node default-k8s-diff-port-997805 status is now: NodeHasSufficientPID
	  Normal  Starting                 112s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           108s               node-controller  Node default-k8s-diff-port-997805 event: Registered Node default-k8s-diff-port-997805 in Controller
	  Normal  NodeReady                96s                kubelet          Node default-k8s-diff-port-997805 status is now: NodeReady
	  Normal  Starting                 57s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  57s (x8 over 57s)  kubelet          Node default-k8s-diff-port-997805 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    57s (x8 over 57s)  kubelet          Node default-k8s-diff-port-997805 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     57s (x8 over 57s)  kubelet          Node default-k8s-diff-port-997805 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           51s                node-controller  Node default-k8s-diff-port-997805 event: Registered Node default-k8s-diff-port-997805 in Controller
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: a6 a8 6c 2c e4 fb 66 95 0d 9f b9 e6 08 00
	[ +32.254238] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a6 a8 6c 2c e4 fb 66 95 0d 9f b9 e6 08 00
	[Dec 2 20:52] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 62 03 bd 14 45 8a 08 06
	[  +0.000590] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 96 27 ad 0d 40 04 08 06
	[Dec 2 20:53] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 56 d8 4f 6f 7c f7 08 06
	[  +0.000700] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 82 e4 ba c0 78 5f 08 06
	[ +10.119645] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000022] ll header: 00000000: ff ff ff ff ff ff ea 15 d8 88 c9 57 08 06
	[  +2.447166] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 62 df 09 53 d6 6e 08 06
	[  +0.000374] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 62 8d 06 71 0a 5e 08 06
	[Dec 2 20:54] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 12 47 13 50 f6 bc 08 06
	[  +0.001523] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ea 15 d8 88 c9 57 08 06
	[ +22.123549] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 36 0d 45 06 42 2a 08 06
	[  +0.000389] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 56 d8 4f 6f 7c f7 08 06
	
	
	==> etcd [25e14e8feafb6c0d6c5261cd5e507b812e39fcb9c7e196408fe69d780ebbcd1d] <==
	{"level":"info","ts":"2025-12-02T20:55:55.078604Z","caller":"traceutil/trace.go:172","msg":"trace[1838275369] transaction","detail":"{read_only:false; response_revision:448; number_of_response:1; }","duration":"197.582016ms","start":"2025-12-02T20:55:54.880993Z","end":"2025-12-02T20:55:55.078575Z","steps":["trace[1838275369] 'process raft request'  (duration: 197.356077ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-02T20:55:55.307371Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"185.008178ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-apiserver-default-k8s-diff-port-997805\" limit:1 ","response":"range_response_count:1 size:7752"}
	{"level":"info","ts":"2025-12-02T20:55:55.307454Z","caller":"traceutil/trace.go:172","msg":"trace[2066049479] range","detail":"{range_begin:/registry/pods/kube-system/kube-apiserver-default-k8s-diff-port-997805; range_end:; response_count:1; response_revision:449; }","duration":"185.091265ms","start":"2025-12-02T20:55:55.122337Z","end":"2025-12-02T20:55:55.307428Z","steps":["trace[2066049479] 'agreement among raft nodes before linearized reading'  (duration: 73.80089ms)","trace[2066049479] 'range keys from in-memory index tree'  (duration: 111.086328ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-02T20:55:55.307396Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"111.178158ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722597461077860260 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/clusterrolebindings/kubernetes-dashboard\" mod_revision:0 > success:<request_put:<key:\"/registry/clusterrolebindings/kubernetes-dashboard\" value_size:1249 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-12-02T20:55:55.307644Z","caller":"traceutil/trace.go:172","msg":"trace[2033977389] transaction","detail":"{read_only:false; response_revision:450; number_of_response:1; }","duration":"223.353051ms","start":"2025-12-02T20:55:55.084277Z","end":"2025-12-02T20:55:55.307630Z","steps":["trace[2033977389] 'process raft request'  (duration: 111.897184ms)","trace[2033977389] 'compare'  (duration: 111.043792ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-02T20:55:55.307674Z","caller":"traceutil/trace.go:172","msg":"trace[1365941716] linearizableReadLoop","detail":"{readStateIndex:479; appliedIndex:477; }","duration":"111.546747ms","start":"2025-12-02T20:55:55.196114Z","end":"2025-12-02T20:55:55.307660Z","steps":["trace[1365941716] 'read index received'  (duration: 65.363µs)","trace[1365941716] 'applied index is now lower than readState.Index'  (duration: 111.480782ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-02T20:55:55.307688Z","caller":"traceutil/trace.go:172","msg":"trace[1298224716] transaction","detail":"{read_only:false; response_revision:451; number_of_response:1; }","duration":"218.987752ms","start":"2025-12-02T20:55:55.088688Z","end":"2025-12-02T20:55:55.307676Z","steps":["trace[1298224716] 'process raft request'  (duration: 218.87919ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-02T20:55:55.307778Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"119.496509ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-02T20:55:55.307930Z","caller":"traceutil/trace.go:172","msg":"trace[1191525244] range","detail":"{range_begin:/registry/clusterroles; range_end:; response_count:0; response_revision:451; }","duration":"119.651279ms","start":"2025-12-02T20:55:55.188267Z","end":"2025-12-02T20:55:55.307918Z","steps":["trace[1191525244] 'agreement among raft nodes before linearized reading'  (duration: 119.474214ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-02T20:55:55.307811Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"119.39798ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/priorityclasses/system-node-critical\" limit:1 ","response":"range_response_count:1 size:442"}
	{"level":"info","ts":"2025-12-02T20:55:55.308006Z","caller":"traceutil/trace.go:172","msg":"trace[1997034449] range","detail":"{range_begin:/registry/priorityclasses/system-node-critical; range_end:; response_count:1; response_revision:451; }","duration":"119.59132ms","start":"2025-12-02T20:55:55.188402Z","end":"2025-12-02T20:55:55.307993Z","steps":["trace[1997034449] 'agreement among raft nodes before linearized reading'  (duration: 119.316734ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-02T20:55:55.450852Z","caller":"traceutil/trace.go:172","msg":"trace[1252777728] linearizableReadLoop","detail":"{readStateIndex:480; appliedIndex:480; }","duration":"121.966061ms","start":"2025-12-02T20:55:55.328862Z","end":"2025-12-02T20:55:55.450828Z","steps":["trace[1252777728] 'read index received'  (duration: 121.935676ms)","trace[1252777728] 'applied index is now lower than readState.Index'  (duration: 6.33µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-02T20:55:55.511662Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"182.77594ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kubernetes-dashboard/dashboard-metrics-scraper\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-02T20:55:55.511732Z","caller":"traceutil/trace.go:172","msg":"trace[1826643080] range","detail":"{range_begin:/registry/deployments/kubernetes-dashboard/dashboard-metrics-scraper; range_end:; response_count:0; response_revision:452; }","duration":"182.859148ms","start":"2025-12-02T20:55:55.328857Z","end":"2025-12-02T20:55:55.511716Z","steps":["trace[1826643080] 'agreement among raft nodes before linearized reading'  (duration: 122.076349ms)","trace[1826643080] 'range keys from in-memory index tree'  (duration: 60.663926ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-02T20:55:55.511875Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"182.973607ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:aggregate-to-view\" limit:1 ","response":"range_response_count:1 size:2030"}
	{"level":"info","ts":"2025-12-02T20:55:55.511921Z","caller":"traceutil/trace.go:172","msg":"trace[1019459205] range","detail":"{range_begin:/registry/clusterroles/system:aggregate-to-view; range_end:; response_count:1; response_revision:453; }","duration":"183.028131ms","start":"2025-12-02T20:55:55.328881Z","end":"2025-12-02T20:55:55.511909Z","steps":["trace[1019459205] 'agreement among raft nodes before linearized reading'  (duration: 182.887632ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-02T20:55:55.512085Z","caller":"traceutil/trace.go:172","msg":"trace[9690257] transaction","detail":"{read_only:false; response_revision:453; number_of_response:1; }","duration":"193.069895ms","start":"2025-12-02T20:55:55.318990Z","end":"2025-12-02T20:55:55.512060Z","steps":["trace[9690257] 'process raft request'  (duration: 131.915422ms)","trace[9690257] 'compare'  (duration: 60.739957ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-02T20:55:55.512203Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"109.792491ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/storage-provisioner\" limit:1 ","response":"range_response_count:1 size:4336"}
	{"level":"info","ts":"2025-12-02T20:55:55.512238Z","caller":"traceutil/trace.go:172","msg":"trace[851423930] range","detail":"{range_begin:/registry/pods/kube-system/storage-provisioner; range_end:; response_count:1; response_revision:453; }","duration":"109.830953ms","start":"2025-12-02T20:55:55.402400Z","end":"2025-12-02T20:55:55.512231Z","steps":["trace[851423930] 'agreement among raft nodes before linearized reading'  (duration: 109.699387ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-02T20:55:55.849927Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"117.060314ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/busybox\" limit:1 ","response":"range_response_count:1 size:2812"}
	{"level":"warn","ts":"2025-12-02T20:55:55.849988Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"119.36545ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:aggregate-to-view\" limit:1 ","response":"range_response_count:1 size:2030"}
	{"level":"info","ts":"2025-12-02T20:55:55.850042Z","caller":"traceutil/trace.go:172","msg":"trace[1668444583] range","detail":"{range_begin:/registry/clusterroles/system:aggregate-to-view; range_end:; response_count:1; response_revision:459; }","duration":"119.415386ms","start":"2025-12-02T20:55:55.730608Z","end":"2025-12-02T20:55:55.850023Z","steps":["trace[1668444583] 'range keys from in-memory index tree'  (duration: 119.238252ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-02T20:55:55.850042Z","caller":"traceutil/trace.go:172","msg":"trace[171357317] range","detail":"{range_begin:/registry/pods/default/busybox; range_end:; response_count:1; response_revision:459; }","duration":"117.174475ms","start":"2025-12-02T20:55:55.732836Z","end":"2025-12-02T20:55:55.850010Z","steps":["trace[171357317] 'range keys from in-memory index tree'  (duration: 116.812214ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-02T20:55:55.849949Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"119.796405ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kubernetes-dashboard\" limit:1 ","response":"range_response_count:1 size:897"}
	{"level":"info","ts":"2025-12-02T20:55:55.850286Z","caller":"traceutil/trace.go:172","msg":"trace[830423202] range","detail":"{range_begin:/registry/namespaces/kubernetes-dashboard; range_end:; response_count:1; response_revision:459; }","duration":"120.140931ms","start":"2025-12-02T20:55:55.730118Z","end":"2025-12-02T20:55:55.850259Z","steps":["trace[830423202] 'range keys from in-memory index tree'  (duration: 119.657189ms)"],"step_count":1}
	
	
	==> kernel <==
	 20:56:48 up  2:39,  0 user,  load average: 3.60, 3.97, 2.71
	Linux default-k8s-diff-port-997805 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [f06d54a2384df567756e9be0cfb30d79b223d7ca905c4709c051828f8e793c87] <==
	I1202 20:55:56.148352       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1202 20:55:56.148615       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1202 20:55:56.148837       1 main.go:148] setting mtu 1500 for CNI 
	I1202 20:55:56.148859       1 main.go:178] kindnetd IP family: "ipv4"
	I1202 20:55:56.148881       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-02T20:55:56Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1202 20:55:56.445432       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1202 20:55:56.445471       1 controller.go:381] "Waiting for informer caches to sync"
	I1202 20:55:56.445642       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1202 20:55:56.445704       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1202 20:55:56.846508       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1202 20:55:56.846539       1 metrics.go:72] Registering metrics
	I1202 20:55:56.846593       1 controller.go:711] "Syncing nftables rules"
	I1202 20:56:06.364682       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1202 20:56:06.364786       1 main.go:301] handling current node
	I1202 20:56:16.370473       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1202 20:56:16.370520       1 main.go:301] handling current node
	I1202 20:56:26.364799       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1202 20:56:26.364839       1 main.go:301] handling current node
	I1202 20:56:36.368209       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1202 20:56:36.368261       1 main.go:301] handling current node
	I1202 20:56:46.372795       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1202 20:56:46.372831       1 main.go:301] handling current node
	
	
	==> kube-apiserver [81b0ec87511a05a7501d98eb27c52f69372a4b30c4ea523db262c140f9b68cd3] <==
	I1202 20:55:54.282858       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1202 20:55:54.282924       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1202 20:55:54.283623       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1202 20:55:54.283720       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1202 20:55:54.283672       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1202 20:55:54.290235       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1202 20:55:54.292022       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1202 20:55:54.300590       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1202 20:55:54.306353       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1202 20:55:54.306425       1 policy_source.go:240] refreshing policies
	I1202 20:55:54.317993       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1202 20:55:54.699437       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1202 20:55:54.854730       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1202 20:55:54.858526       1 controller.go:667] quota admission added evaluator for: namespaces
	I1202 20:55:55.311202       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1202 20:55:55.524913       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1202 20:55:55.641060       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1202 20:55:55.853433       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1202 20:55:55.937240       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.24.230"}
	I1202 20:55:55.949300       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.154.227"}
	I1202 20:55:57.733382       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1202 20:55:57.733430       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1202 20:55:57.935991       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1202 20:55:58.233356       1 controller.go:667] quota admission added evaluator for: endpoints
	I1202 20:55:58.233356       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [e13e6c4d6c5da602ac2e1402a7612205c5a0ceffdccf7618da3035e562a7d9d3] <==
	I1202 20:55:57.609246       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1202 20:55:57.609380       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-997805"
	I1202 20:55:57.609442       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1202 20:55:57.623803       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1202 20:55:57.627300       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1202 20:55:57.629239       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1202 20:55:57.629244       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1202 20:55:57.630500       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1202 20:55:57.630567       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1202 20:55:57.630577       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1202 20:55:57.630583       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1202 20:55:57.630592       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1202 20:55:57.630569       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1202 20:55:57.630685       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1202 20:55:57.631283       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1202 20:55:57.631722       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1202 20:55:57.631754       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1202 20:55:57.634299       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1202 20:55:57.634483       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1202 20:55:57.636346       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1202 20:55:57.637436       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1202 20:55:57.637476       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1202 20:55:57.639755       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1202 20:55:57.640943       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1202 20:55:57.660495       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [5ad0a1655ba23d5613d29f48e14efa7b904937342c2b4f154af87389ad6ae5a9] <==
	I1202 20:55:55.693952       1 server_linux.go:53] "Using iptables proxy"
	I1202 20:55:55.760974       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1202 20:55:55.861813       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1202 20:55:55.861858       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1202 20:55:55.861980       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1202 20:55:55.892575       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1202 20:55:55.892644       1 server_linux.go:132] "Using iptables Proxier"
	I1202 20:55:55.901965       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1202 20:55:55.902499       1 server.go:527] "Version info" version="v1.34.2"
	I1202 20:55:55.902791       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 20:55:55.907109       1 config.go:200] "Starting service config controller"
	I1202 20:55:55.907258       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1202 20:55:55.907470       1 config.go:106] "Starting endpoint slice config controller"
	I1202 20:55:55.907527       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1202 20:55:55.907679       1 config.go:403] "Starting serviceCIDR config controller"
	I1202 20:55:55.907715       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1202 20:55:55.909145       1 config.go:309] "Starting node config controller"
	I1202 20:55:55.909200       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1202 20:55:55.909210       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1202 20:55:56.008163       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1202 20:55:56.008192       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1202 20:55:56.008198       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [0c7e2844e2dbdbf5b9ffe8bf4e8d07304b64b059e3d4c965c2010c5d8a39c499] <==
	I1202 20:55:52.891386       1 serving.go:386] Generated self-signed cert in-memory
	W1202 20:55:54.215137       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1202 20:55:54.215174       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [role.rbac.authorization.k8s.io "extension-apiserver-authentication-reader" not found, role.rbac.authorization.k8s.io "system::leader-locking-kube-scheduler" not found]
	W1202 20:55:54.215189       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1202 20:55:54.215198       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1202 20:55:54.236291       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1202 20:55:54.236318       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 20:55:54.238876       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1202 20:55:54.238913       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1202 20:55:54.239292       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1202 20:55:54.239755       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1202 20:55:54.340641       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 02 20:55:58 default-k8s-diff-port-997805 kubelet[726]: I1202 20:55:58.144611     726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2mfc\" (UniqueName: \"kubernetes.io/projected/cbfcab3a-34f4-49e3-b330-2077b65e6a48-kube-api-access-c2mfc\") pod \"kubernetes-dashboard-855c9754f9-jz8xk\" (UID: \"cbfcab3a-34f4-49e3-b330-2077b65e6a48\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-jz8xk"
	Dec 02 20:55:58 default-k8s-diff-port-997805 kubelet[726]: I1202 20:55:58.144712     726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhxp8\" (UniqueName: \"kubernetes.io/projected/cc5c7477-3af9-4955-a7b0-94a907898050-kube-api-access-bhxp8\") pod \"dashboard-metrics-scraper-6ffb444bf9-vhp59\" (UID: \"cc5c7477-3af9-4955-a7b0-94a907898050\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vhp59"
	Dec 02 20:56:01 default-k8s-diff-port-997805 kubelet[726]: I1202 20:56:01.654511     726 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Dec 02 20:56:04 default-k8s-diff-port-997805 kubelet[726]: I1202 20:56:04.490246     726 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-jz8xk" podStartSLOduration=2.356286059 podStartE2EDuration="6.490223678s" podCreationTimestamp="2025-12-02 20:55:58 +0000 UTC" firstStartedPulling="2025-12-02 20:55:58.399480607 +0000 UTC m=+6.778320079" lastFinishedPulling="2025-12-02 20:56:02.533418221 +0000 UTC m=+10.912257698" observedRunningTime="2025-12-02 20:56:02.865871705 +0000 UTC m=+11.244711195" watchObservedRunningTime="2025-12-02 20:56:04.490223678 +0000 UTC m=+12.869063168"
	Dec 02 20:56:05 default-k8s-diff-port-997805 kubelet[726]: I1202 20:56:05.822624     726 scope.go:117] "RemoveContainer" containerID="51f3f00f170a758498b59c1187991a56865c24b44f57f6cfc0c511400ad68660"
	Dec 02 20:56:06 default-k8s-diff-port-997805 kubelet[726]: I1202 20:56:06.826958     726 scope.go:117] "RemoveContainer" containerID="51f3f00f170a758498b59c1187991a56865c24b44f57f6cfc0c511400ad68660"
	Dec 02 20:56:06 default-k8s-diff-port-997805 kubelet[726]: I1202 20:56:06.827294     726 scope.go:117] "RemoveContainer" containerID="200b961bc8b01d2d50a50e095ea2056aa5e2e23febb2edfacc81d4ddfb956fc0"
	Dec 02 20:56:06 default-k8s-diff-port-997805 kubelet[726]: E1202 20:56:06.827471     726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vhp59_kubernetes-dashboard(cc5c7477-3af9-4955-a7b0-94a907898050)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vhp59" podUID="cc5c7477-3af9-4955-a7b0-94a907898050"
	Dec 02 20:56:07 default-k8s-diff-port-997805 kubelet[726]: I1202 20:56:07.832199     726 scope.go:117] "RemoveContainer" containerID="200b961bc8b01d2d50a50e095ea2056aa5e2e23febb2edfacc81d4ddfb956fc0"
	Dec 02 20:56:07 default-k8s-diff-port-997805 kubelet[726]: E1202 20:56:07.832454     726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vhp59_kubernetes-dashboard(cc5c7477-3af9-4955-a7b0-94a907898050)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vhp59" podUID="cc5c7477-3af9-4955-a7b0-94a907898050"
	Dec 02 20:56:16 default-k8s-diff-port-997805 kubelet[726]: I1202 20:56:16.673192     726 scope.go:117] "RemoveContainer" containerID="200b961bc8b01d2d50a50e095ea2056aa5e2e23febb2edfacc81d4ddfb956fc0"
	Dec 02 20:56:16 default-k8s-diff-port-997805 kubelet[726]: I1202 20:56:16.858777     726 scope.go:117] "RemoveContainer" containerID="200b961bc8b01d2d50a50e095ea2056aa5e2e23febb2edfacc81d4ddfb956fc0"
	Dec 02 20:56:16 default-k8s-diff-port-997805 kubelet[726]: I1202 20:56:16.859092     726 scope.go:117] "RemoveContainer" containerID="8b9571fc1afb59ffda70959998b9386a8cc1a412c773117671bd059b0c151419"
	Dec 02 20:56:16 default-k8s-diff-port-997805 kubelet[726]: E1202 20:56:16.859336     726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vhp59_kubernetes-dashboard(cc5c7477-3af9-4955-a7b0-94a907898050)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vhp59" podUID="cc5c7477-3af9-4955-a7b0-94a907898050"
	Dec 02 20:56:25 default-k8s-diff-port-997805 kubelet[726]: I1202 20:56:25.884733     726 scope.go:117] "RemoveContainer" containerID="1e15bb4007b6f6ac5c5aba376e81233c28da69653a99ea88226c07cfeee8a9a7"
	Dec 02 20:56:26 default-k8s-diff-port-997805 kubelet[726]: I1202 20:56:26.672950     726 scope.go:117] "RemoveContainer" containerID="8b9571fc1afb59ffda70959998b9386a8cc1a412c773117671bd059b0c151419"
	Dec 02 20:56:26 default-k8s-diff-port-997805 kubelet[726]: E1202 20:56:26.673277     726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vhp59_kubernetes-dashboard(cc5c7477-3af9-4955-a7b0-94a907898050)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vhp59" podUID="cc5c7477-3af9-4955-a7b0-94a907898050"
	Dec 02 20:56:36 default-k8s-diff-port-997805 kubelet[726]: I1202 20:56:36.752920     726 scope.go:117] "RemoveContainer" containerID="8b9571fc1afb59ffda70959998b9386a8cc1a412c773117671bd059b0c151419"
	Dec 02 20:56:36 default-k8s-diff-port-997805 kubelet[726]: I1202 20:56:36.916527     726 scope.go:117] "RemoveContainer" containerID="8b9571fc1afb59ffda70959998b9386a8cc1a412c773117671bd059b0c151419"
	Dec 02 20:56:36 default-k8s-diff-port-997805 kubelet[726]: I1202 20:56:36.916749     726 scope.go:117] "RemoveContainer" containerID="c9080db2b6daf76ef63b2b59e74d0239edbb838d08547298dd4502c7c3b4d9f4"
	Dec 02 20:56:36 default-k8s-diff-port-997805 kubelet[726]: E1202 20:56:36.917061     726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vhp59_kubernetes-dashboard(cc5c7477-3af9-4955-a7b0-94a907898050)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vhp59" podUID="cc5c7477-3af9-4955-a7b0-94a907898050"
	Dec 02 20:56:45 default-k8s-diff-port-997805 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 02 20:56:45 default-k8s-diff-port-997805 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 02 20:56:45 default-k8s-diff-port-997805 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 20:56:45 default-k8s-diff-port-997805 systemd[1]: kubelet.service: Consumed 1.945s CPU time.
	
	
	==> kubernetes-dashboard [f7c1779df921dc77252b05de7b4552d502a7c9e38f020d197cbdfd6540d6213a] <==
	2025/12/02 20:56:02 Starting overwatch
	2025/12/02 20:56:02 Using namespace: kubernetes-dashboard
	2025/12/02 20:56:02 Using in-cluster config to connect to apiserver
	2025/12/02 20:56:02 Using secret token for csrf signing
	2025/12/02 20:56:02 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/02 20:56:02 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/02 20:56:02 Successful initial request to the apiserver, version: v1.34.2
	2025/12/02 20:56:02 Generating JWE encryption key
	2025/12/02 20:56:02 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/02 20:56:02 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/02 20:56:02 Initializing JWE encryption key from synchronized object
	2025/12/02 20:56:02 Creating in-cluster Sidecar client
	2025/12/02 20:56:02 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/02 20:56:02 Serving insecurely on HTTP port: 9090
	2025/12/02 20:56:32 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [1e15bb4007b6f6ac5c5aba376e81233c28da69653a99ea88226c07cfeee8a9a7] <==
	I1202 20:55:55.745333       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1202 20:56:25.748002       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [35e720802a1bf3bbed62adc89a0f19dce7a67de2db637573eb1894ab9ebb8f24] <==
	I1202 20:56:25.945665       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1202 20:56:25.953372       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1202 20:56:25.953416       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1202 20:56:25.955574       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:56:29.411827       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:56:33.672384       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:56:37.271276       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:56:40.325457       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:56:43.348084       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:56:43.354422       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1202 20:56:43.354637       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1202 20:56:43.354820       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-997805_6cedf7ef-97cd-4056-9921-7b63b41ee2ed!
	I1202 20:56:43.354776       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"630e2ab7-c763-4f65-86eb-788c49314bcc", APIVersion:"v1", ResourceVersion:"638", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-997805_6cedf7ef-97cd-4056-9921-7b63b41ee2ed became leader
	W1202 20:56:43.357786       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:56:43.361997       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1202 20:56:43.455877       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-997805_6cedf7ef-97cd-4056-9921-7b63b41ee2ed!
	W1202 20:56:45.365991       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:56:45.371461       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:56:47.375843       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:56:47.382502       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-997805 -n default-k8s-diff-port-997805
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-997805 -n default-k8s-diff-port-997805: exit status 2 (336.603831ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-997805 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
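(Editor's note) The field selector in the kubectl call a couple of lines above excludes pods whose phase is Running, which is why an empty result here means every pod was at least scheduled and running. Reproduced by hand against the same context, the check looks like this (equivalent form of the command the harness ran):

    kubectl --context default-k8s-diff-port-997805 get pods -A \
      --field-selector=status.phase!=Running \
      -o jsonpath='{.items[*].metadata.name}'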
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-997805
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-997805:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c25b25f1d6428e1d0325a0468af68a3a621d93f89c284fc809071a0c5b3636f1",
	        "Created": "2025-12-02T20:54:37.048348832Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 759767,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-02T20:55:43.909243691Z",
	            "FinishedAt": "2025-12-02T20:55:42.856980855Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/c25b25f1d6428e1d0325a0468af68a3a621d93f89c284fc809071a0c5b3636f1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c25b25f1d6428e1d0325a0468af68a3a621d93f89c284fc809071a0c5b3636f1/hostname",
	        "HostsPath": "/var/lib/docker/containers/c25b25f1d6428e1d0325a0468af68a3a621d93f89c284fc809071a0c5b3636f1/hosts",
	        "LogPath": "/var/lib/docker/containers/c25b25f1d6428e1d0325a0468af68a3a621d93f89c284fc809071a0c5b3636f1/c25b25f1d6428e1d0325a0468af68a3a621d93f89c284fc809071a0c5b3636f1-json.log",
	        "Name": "/default-k8s-diff-port-997805",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-997805:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-997805",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c25b25f1d6428e1d0325a0468af68a3a621d93f89c284fc809071a0c5b3636f1",
	                "LowerDir": "/var/lib/docker/overlay2/438615afda3ee0db74f277419380adcb83f92340686904c8b7104d5c82409f9b-init/diff:/var/lib/docker/overlay2/49bb44e987885c48600d5ae7ad3c81cad82a00f38070c0460882c5746d9fae59/diff",
	                "MergedDir": "/var/lib/docker/overlay2/438615afda3ee0db74f277419380adcb83f92340686904c8b7104d5c82409f9b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/438615afda3ee0db74f277419380adcb83f92340686904c8b7104d5c82409f9b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/438615afda3ee0db74f277419380adcb83f92340686904c8b7104d5c82409f9b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-997805",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-997805/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-997805",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-997805",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-997805",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "8838ac99fbe3fe4c9fe647f60f12e972a87928aabd3a210f3a398be9baeeaea0",
	            "SandboxKey": "/var/run/docker/netns/8838ac99fbe3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33508"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33509"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33512"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33510"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33511"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-997805": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "13fe483902b92417fb08b9a25307f2df4dbcc897dff65b84bbef9f2f680f60c8",
	                    "EndpointID": "8db48d775025987b658fd97692e2ba98a47b3f05f2a7fb48257960ac7ddf18bb",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "f2:87:46:d0:55:1b",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-997805",
	                        "c25b25f1d642"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
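(Editor's note) The inspect dump above is mostly useful for the port map and the network block. To pull out just the forwarded SSH port, the same Go template that minikube itself uses later in this log can be run directly; on this node it resolves to 33508:

    docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' default-k8s-diff-port-997805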
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-997805 -n default-k8s-diff-port-997805
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-997805 -n default-k8s-diff-port-997805: exit status 2 (341.578515ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
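(Editor's note) Both status probes in this post-mortem apply a Go template to minikube's status struct ({{.APIServer}} earlier, {{.Host}} here). When debugging by hand the fields can be combined into one call; the Kubelet field is assumed from minikube's default status output rather than taken from this report:

    out/minikube-linux-amd64 status -p default-k8s-diff-port-997805 \
      --format='host={{.Host}} kubelet={{.Kubelet}} apiserver={{.APIServer}}'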
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-997805 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-997805 logs -n 25: (1.164118311s)
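(Editor's note) The dump that follows is the tail of each component log, 25 lines per source (-n 25). When reproducing locally it can be redirected to a file instead of stdout; the --file flag is assumed to be available in this minikube version:

    out/minikube-linux-amd64 -p default-k8s-diff-port-997805 logs -n 25 --file=./pause-postmortem.log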
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬───────
──────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼───────
──────────────┤
	│ stop    │ -p default-k8s-diff-port-997805 --alsologtostderr -v=3                                                                                                                                                                                               │ default-k8s-diff-port-997805 │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:55 UTC │
	│ addons  │ enable dashboard -p newest-cni-245604 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ newest-cni-245604            │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:55 UTC │
	│ start   │ -p newest-cni-245604 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-245604            │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:55 UTC │
	│ addons  │ enable dashboard -p no-preload-336331 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ no-preload-336331            │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:55 UTC │
	│ start   │ -p no-preload-336331 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-336331            │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:56 UTC │
	│ image   │ newest-cni-245604 image list --format=json                                                                                                                                                                                                           │ newest-cni-245604            │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:55 UTC │
	│ pause   │ -p newest-cni-245604 --alsologtostderr -v=1                                                                                                                                                                                                          │ newest-cni-245604            │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-997805 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-997805 │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:55 UTC │
	│ start   │ -p default-k8s-diff-port-997805 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-997805 │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:56 UTC │
	│ delete  │ -p newest-cni-245604                                                                                                                                                                                                                                 │ newest-cni-245604            │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:55 UTC │
	│ delete  │ -p newest-cni-245604                                                                                                                                                                                                                                 │ newest-cni-245604            │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:55 UTC │
	│ delete  │ -p disable-driver-mounts-234978                                                                                                                                                                                                                      │ disable-driver-mounts-234978 │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:55 UTC │
	│ start   │ -p embed-certs-386191 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-386191           │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:56 UTC │
	│ image   │ old-k8s-version-992336 image list --format=json                                                                                                                                                                                                      │ old-k8s-version-992336       │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:55 UTC │
	│ pause   │ -p old-k8s-version-992336 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-992336       │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │                     │
	│ delete  │ -p old-k8s-version-992336                                                                                                                                                                                                                            │ old-k8s-version-992336       │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:56 UTC │
	│ delete  │ -p old-k8s-version-992336                                                                                                                                                                                                                            │ old-k8s-version-992336       │ jenkins │ v1.37.0 │ 02 Dec 25 20:56 UTC │ 02 Dec 25 20:56 UTC │
	│ image   │ no-preload-336331 image list --format=json                                                                                                                                                                                                           │ no-preload-336331            │ jenkins │ v1.37.0 │ 02 Dec 25 20:56 UTC │ 02 Dec 25 20:56 UTC │
	│ pause   │ -p no-preload-336331 --alsologtostderr -v=1                                                                                                                                                                                                          │ no-preload-336331            │ jenkins │ v1.37.0 │ 02 Dec 25 20:56 UTC │                     │
	│ delete  │ -p no-preload-336331                                                                                                                                                                                                                                 │ no-preload-336331            │ jenkins │ v1.37.0 │ 02 Dec 25 20:56 UTC │ 02 Dec 25 20:56 UTC │
	│ delete  │ -p no-preload-336331                                                                                                                                                                                                                                 │ no-preload-336331            │ jenkins │ v1.37.0 │ 02 Dec 25 20:56 UTC │ 02 Dec 25 20:56 UTC │
	│ addons  │ enable metrics-server -p embed-certs-386191 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ embed-certs-386191           │ jenkins │ v1.37.0 │ 02 Dec 25 20:56 UTC │                     │
	│ image   │ default-k8s-diff-port-997805 image list --format=json                                                                                                                                                                                                │ default-k8s-diff-port-997805 │ jenkins │ v1.37.0 │ 02 Dec 25 20:56 UTC │ 02 Dec 25 20:56 UTC │
	│ stop    │ -p embed-certs-386191 --alsologtostderr -v=3                                                                                                                                                                                                         │ embed-certs-386191           │ jenkins │ v1.37.0 │ 02 Dec 25 20:56 UTC │                     │
	│ pause   │ -p default-k8s-diff-port-997805 --alsologtostderr -v=1                                                                                                                                                                                               │ default-k8s-diff-port-997805 │ jenkins │ v1.37.0 │ 02 Dec 25 20:56 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴───────
──────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 20:55:49
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 20:55:49.973376  761851 out.go:360] Setting OutFile to fd 1 ...
	I1202 20:55:49.973479  761851 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:55:49.973486  761851 out.go:374] Setting ErrFile to fd 2...
	I1202 20:55:49.973492  761851 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:55:49.973784  761851 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-407427/.minikube/bin
	I1202 20:55:49.974402  761851 out.go:368] Setting JSON to false
	I1202 20:55:49.976053  761851 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":9494,"bootTime":1764699456,"procs":379,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 20:55:49.976153  761851 start.go:143] virtualization: kvm guest
	I1202 20:55:49.979903  761851 out.go:179] * [embed-certs-386191] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1202 20:55:49.981563  761851 out.go:179]   - MINIKUBE_LOCATION=21997
	I1202 20:55:49.981711  761851 notify.go:221] Checking for updates...
	I1202 20:55:49.985961  761851 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 20:55:49.989444  761851 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-407427/kubeconfig
	I1202 20:55:49.990856  761851 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-407427/.minikube
	I1202 20:55:49.992198  761851 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1202 20:55:49.994165  761851 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 20:55:49.996734  761851 config.go:182] Loaded profile config "default-k8s-diff-port-997805": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 20:55:49.996944  761851 config.go:182] Loaded profile config "no-preload-336331": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 20:55:49.997173  761851 config.go:182] Loaded profile config "old-k8s-version-992336": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1202 20:55:49.997373  761851 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 20:55:50.033364  761851 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1202 20:55:50.033467  761851 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 20:55:50.114622  761851 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-02 20:55:50.101227741 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 20:55:50.114779  761851 docker.go:319] overlay module found
	I1202 20:55:50.117537  761851 out.go:179] * Using the docker driver based on user configuration
	I1202 20:55:50.119145  761851 start.go:309] selected driver: docker
	I1202 20:55:50.119167  761851 start.go:927] validating driver "docker" against <nil>
	I1202 20:55:50.119183  761851 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 20:55:50.120035  761851 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 20:55:50.211212  761851 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-02 20:55:50.198488456 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 20:55:50.211445  761851 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1202 20:55:50.211790  761851 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 20:55:50.214433  761851 out.go:179] * Using Docker driver with root privileges
	I1202 20:55:50.218243  761851 cni.go:84] Creating CNI manager for ""
	I1202 20:55:50.218353  761851 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 20:55:50.218375  761851 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1202 20:55:50.218508  761851 start.go:353] cluster config:
	{Name:embed-certs-386191 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-386191 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPI
D:0 GPUs: AutoPauseInterval:1m0s}
	I1202 20:55:50.220045  761851 out.go:179] * Starting "embed-certs-386191" primary control-plane node in "embed-certs-386191" cluster
	I1202 20:55:50.221707  761851 cache.go:134] Beginning downloading kic base image for docker with crio
	I1202 20:55:50.223105  761851 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1202 20:55:50.224334  761851 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 20:55:50.224383  761851 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 20:55:50.224379  761851 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21997-407427/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1202 20:55:50.224423  761851 cache.go:65] Caching tarball of preloaded images
	I1202 20:55:50.224531  761851 preload.go:238] Found /home/jenkins/minikube-integration/21997-407427/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1202 20:55:50.224544  761851 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1202 20:55:50.224682  761851 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/config.json ...
	I1202 20:55:50.224706  761851 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/config.json: {Name:mk4df57c1427e88de36c6d265cf4b7b9447ba4a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:55:50.254982  761851 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1202 20:55:50.255008  761851 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1202 20:55:50.255030  761851 cache.go:243] Successfully downloaded all kic artifacts
	I1202 20:55:50.255092  761851 start.go:360] acquireMachinesLock for embed-certs-386191: {Name:mk07b451c8d7193712ed79603183bf03b141f2ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 20:55:50.255209  761851 start.go:364] duration metric: took 90.207µs to acquireMachinesLock for "embed-certs-386191"
	I1202 20:55:50.255244  761851 start.go:93] Provisioning new machine with config: &{Name:embed-certs-386191 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-386191 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 20:55:50.255372  761851 start.go:125] createHost starting for "" (driver="docker")
	W1202 20:55:47.478474  754876 pod_ready.go:104] pod "coredns-7d764666f9-ghxk6" is not "Ready", error: <nil>
	W1202 20:55:49.480219  754876 pod_ready.go:104] pod "coredns-7d764666f9-ghxk6" is not "Ready", error: <nil>
	I1202 20:55:48.658867  759377 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 20:55:48.658893  759377 machine.go:97] duration metric: took 4.363922202s to provisionDockerMachine
	I1202 20:55:48.658908  759377 start.go:293] postStartSetup for "default-k8s-diff-port-997805" (driver="docker")
	I1202 20:55:48.659934  759377 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 20:55:48.660266  759377 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 20:55:48.660319  759377 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-997805
	I1202 20:55:48.684270  759377 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/default-k8s-diff-port-997805/id_rsa Username:docker}
	I1202 20:55:48.800470  759377 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 20:55:48.806594  759377 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 20:55:48.806641  759377 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 20:55:48.806659  759377 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-407427/.minikube/addons for local assets ...
	I1202 20:55:48.806723  759377 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-407427/.minikube/files for local assets ...
	I1202 20:55:48.806832  759377 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-407427/.minikube/files/etc/ssl/certs/4110322.pem -> 4110322.pem in /etc/ssl/certs
	I1202 20:55:48.807095  759377 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 20:55:48.817526  759377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/files/etc/ssl/certs/4110322.pem --> /etc/ssl/certs/4110322.pem (1708 bytes)
	I1202 20:55:48.843728  759377 start.go:296] duration metric: took 183.799228ms for postStartSetup
	I1202 20:55:48.843844  759377 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 20:55:48.843886  759377 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-997805
	I1202 20:55:48.867562  759377 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/default-k8s-diff-port-997805/id_rsa Username:docker}
	I1202 20:55:48.976679  759377 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 20:55:48.983737  759377 fix.go:56] duration metric: took 5.130755935s for fixHost
	I1202 20:55:48.983779  759377 start.go:83] releasing machines lock for "default-k8s-diff-port-997805", held for 5.130814844s
	I1202 20:55:48.983853  759377 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-997805
	I1202 20:55:49.008951  759377 ssh_runner.go:195] Run: cat /version.json
	I1202 20:55:49.009046  759377 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-997805
	I1202 20:55:49.009048  759377 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 20:55:49.009136  759377 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-997805
	I1202 20:55:49.034693  759377 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/default-k8s-diff-port-997805/id_rsa Username:docker}
	I1202 20:55:49.035313  759377 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/default-k8s-diff-port-997805/id_rsa Username:docker}
	I1202 20:55:49.217584  759377 ssh_runner.go:195] Run: systemctl --version
	I1202 20:55:49.226948  759377 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 20:55:49.280525  759377 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 20:55:49.287579  759377 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 20:55:49.287663  759377 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 20:55:49.299593  759377 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1202 20:55:49.299624  759377 start.go:496] detecting cgroup driver to use...
	I1202 20:55:49.299667  759377 detect.go:190] detected "systemd" cgroup driver on host os
	I1202 20:55:49.299717  759377 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 20:55:49.321346  759377 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 20:55:49.340202  759377 docker.go:218] disabling cri-docker service (if available) ...
	I1202 20:55:49.340276  759377 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 20:55:49.364580  759377 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 20:55:49.384570  759377 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 20:55:49.507838  759377 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 20:55:49.636982  759377 docker.go:234] disabling docker service ...
	I1202 20:55:49.637124  759377 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 20:55:49.660429  759377 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 20:55:49.676580  759377 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 20:55:49.805919  759377 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 20:55:49.932552  759377 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 20:55:49.950808  759377 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 20:55:49.973269  759377 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1202 20:55:49.973378  759377 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:55:49.987382  759377 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1202 20:55:49.987446  759377 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:55:50.001518  759377 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:55:50.015622  759377 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:55:50.029383  759377 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 20:55:50.042396  759377 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:55:50.055622  759377 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:55:50.069706  759377 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:55:50.082027  759377 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 20:55:50.093878  759377 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 20:55:50.106172  759377 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 20:55:50.241651  759377 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1202 20:55:51.093615  759377 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 20:55:51.093712  759377 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 20:55:51.098803  759377 start.go:564] Will wait 60s for crictl version
	I1202 20:55:51.098893  759377 ssh_runner.go:195] Run: which crictl
	I1202 20:55:51.103616  759377 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 20:55:51.134275  759377 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1202 20:55:51.134365  759377 ssh_runner.go:195] Run: crio --version
	I1202 20:55:51.176508  759377 ssh_runner.go:195] Run: crio --version
	I1202 20:55:51.212619  759377 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.2 ...
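(Editor's note) The sequence of sed commands above rewrites the cri-o drop-in before the crio restart. Reconstructed from those commands rather than copied from the node, and with the TOML section placement assumed from cri-o's default layout, the relevant part of the drop-in ends up roughly as:

    # /etc/crio/crio.conf.d/02-crio.conf (keys touched by minikube's edits)
    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]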
	I1202 20:55:51.213954  759377 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-997805 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 20:55:51.239456  759377 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1202 20:55:51.247008  759377 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 20:55:51.258836  759377 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-997805 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-997805 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 20:55:51.259035  759377 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 20:55:51.259113  759377 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 20:55:51.305184  759377 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 20:55:51.305211  759377 crio.go:433] Images already preloaded, skipping extraction
	I1202 20:55:51.305279  759377 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 20:55:51.336679  759377 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 20:55:51.336721  759377 cache_images.go:86] Images are preloaded, skipping loading
	I1202 20:55:51.336736  759377 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.2 crio true true} ...
	I1202 20:55:51.336850  759377 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-997805 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-997805 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1202 20:55:51.336915  759377 ssh_runner.go:195] Run: crio config
	I1202 20:55:51.395485  759377 cni.go:84] Creating CNI manager for ""
	I1202 20:55:51.395526  759377 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 20:55:51.395553  759377 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1202 20:55:51.395590  759377 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-997805 NodeName:default-k8s-diff-port-997805 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 20:55:51.395786  759377 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-997805"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1202 20:55:51.395870  759377 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1202 20:55:51.406735  759377 binaries.go:51] Found k8s binaries, skipping transfer
	I1202 20:55:51.406822  759377 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 20:55:51.416228  759377 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1202 20:55:51.430748  759377 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1202 20:55:51.448244  759377 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
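The 2224-byte file just copied is the kubeadm/kubelet/kube-proxy config printed above, staged at /var/tmp/minikube/kubeadm.yaml.new. A hedged way to sanity-check a config like this by hand is to pull it off the node and run it through kubeadm's validator; this assumes a recent kubeadm on your PATH (the `config validate` subcommand is not present in older releases).

    # Dump the staged config from the node and validate it locally (path from this log)
    minikube -p default-k8s-diff-port-997805 ssh -- sudo cat /var/tmp/minikube/kubeadm.yaml.new > kubeadm.yaml
    kubeadm config validate --config kubeadm.yaml   # checks API versions and field names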
	I1202 20:55:51.463482  759377 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1202 20:55:51.467906  759377 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
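The one-liner above is minikube's /etc/hosts update: filter out any stale control-plane.minikube.internal entry, append the current one, write the result to a temp file, then copy it back with sudo so the write happens as root. The same pattern works for any managed hosts entry; the variable names below are illustrative, and the IP and hostname are simply the values from this run.

    # Replace-or-add a hosts entry without editing /etc/hosts in place
    HOSTS_IP=192.168.85.2
    HOSTS_NAME=control-plane.minikube.internal
    { grep -v $'\t'"$HOSTS_NAME"'$' /etc/hosts; printf '%s\t%s\n' "$HOSTS_IP" "$HOSTS_NAME"; } > /tmp/hosts.$$
    sudo cp /tmp/hosts.$$ /etc/hosts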
	I1202 20:55:51.480393  759377 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 20:55:51.588830  759377 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 20:55:51.618253  759377 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/default-k8s-diff-port-997805 for IP: 192.168.85.2
	I1202 20:55:51.618282  759377 certs.go:195] generating shared ca certs ...
	I1202 20:55:51.618303  759377 certs.go:227] acquiring lock for ca certs: {Name:mkea11924193dd4dc459e16d70207bde383196df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:55:51.618470  759377 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-407427/.minikube/ca.key
	I1202 20:55:51.618534  759377 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-407427/.minikube/proxy-client-ca.key
	I1202 20:55:51.618547  759377 certs.go:257] generating profile certs ...
	I1202 20:55:51.618661  759377 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/default-k8s-diff-port-997805/client.key
	I1202 20:55:51.618759  759377 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/default-k8s-diff-port-997805/apiserver.key.36ffc693
	I1202 20:55:51.618817  759377 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/default-k8s-diff-port-997805/proxy-client.key
	I1202 20:55:51.618958  759377 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/411032.pem (1338 bytes)
	W1202 20:55:51.619000  759377 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-407427/.minikube/certs/411032_empty.pem, impossibly tiny 0 bytes
	I1202 20:55:51.619010  759377 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca-key.pem (1679 bytes)
	I1202 20:55:51.619043  759377 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca.pem (1082 bytes)
	I1202 20:55:51.619087  759377 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/cert.pem (1123 bytes)
	I1202 20:55:51.619120  759377 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/key.pem (1675 bytes)
	I1202 20:55:51.619173  759377 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/files/etc/ssl/certs/4110322.pem (1708 bytes)
	I1202 20:55:51.619958  759377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 20:55:51.642775  759377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1202 20:55:51.668086  759377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 20:55:51.695111  759377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1202 20:55:51.723055  759377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/default-k8s-diff-port-997805/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1202 20:55:51.757108  759377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/default-k8s-diff-port-997805/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1202 20:55:51.782582  759377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/default-k8s-diff-port-997805/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 20:55:51.803028  759377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/default-k8s-diff-port-997805/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1202 20:55:51.823897  759377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 20:55:51.845621  759377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/certs/411032.pem --> /usr/share/ca-certificates/411032.pem (1338 bytes)
	I1202 20:55:51.866855  759377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/files/etc/ssl/certs/4110322.pem --> /usr/share/ca-certificates/4110322.pem (1708 bytes)
	I1202 20:55:51.890515  759377 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 20:55:51.906355  759377 ssh_runner.go:195] Run: openssl version
	I1202 20:55:51.914259  759377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4110322.pem && ln -fs /usr/share/ca-certificates/4110322.pem /etc/ssl/certs/4110322.pem"
	I1202 20:55:51.925148  759377 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4110322.pem
	I1202 20:55:51.929800  759377 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 20:13 /usr/share/ca-certificates/4110322.pem
	I1202 20:55:51.929869  759377 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4110322.pem
	I1202 20:55:51.972279  759377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4110322.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 20:55:51.983418  759377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 20:55:51.993784  759377 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 20:55:51.999249  759377 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 19:55 /usr/share/ca-certificates/minikubeCA.pem
	I1202 20:55:51.999316  759377 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 20:55:52.049373  759377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 20:55:52.061515  759377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/411032.pem && ln -fs /usr/share/ca-certificates/411032.pem /etc/ssl/certs/411032.pem"
	I1202 20:55:52.072126  759377 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/411032.pem
	I1202 20:55:52.076862  759377 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 20:13 /usr/share/ca-certificates/411032.pem
	I1202 20:55:52.076956  759377 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/411032.pem
	I1202 20:55:52.126642  759377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/411032.pem /etc/ssl/certs/51391683.0"
	I1202 20:55:52.138458  759377 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 20:55:52.143543  759377 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1202 20:55:52.198225  759377 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1202 20:55:52.254754  759377 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1202 20:55:52.319722  759377 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1202 20:55:52.380903  759377 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1202 20:55:52.422910  759377 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
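Each `-checkend 86400` call above asks openssl whether the certificate will still be valid 86400 seconds (24 hours) from now: exit status 0 means it will not expire within that window, non-zero means it will. A standalone illustration, reusing the apiserver cert path scp'd earlier in this log:

    # Exit 0: cert valid for at least another 24h; non-zero: it expires within 24h
    if openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400; then
        echo "apiserver cert good for at least another day"
    else
        echo "apiserver cert expires within 24h"
    fi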
	I1202 20:55:52.483325  759377 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-997805 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-997805 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 20:55:52.483438  759377 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 20:55:52.483499  759377 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 20:55:52.522620  759377 cri.go:89] found id: "25e14e8feafb6c0d6c5261cd5e507b812e39fcb9c7e196408fe69d780ebbcd1d"
	I1202 20:55:52.522651  759377 cri.go:89] found id: "0c7e2844e2dbdbf5b9ffe8bf4e8d07304b64b059e3d4c965c2010c5d8a39c499"
	I1202 20:55:52.522657  759377 cri.go:89] found id: "81b0ec87511a05a7501d98eb27c52f69372a4b30c4ea523db262c140f9b68cd3"
	I1202 20:55:52.522662  759377 cri.go:89] found id: "e13e6c4d6c5da602ac2e1402a7612205c5a0ceffdccf7618da3035e562a7d9d3"
	I1202 20:55:52.522667  759377 cri.go:89] found id: ""
	I1202 20:55:52.522718  759377 ssh_runner.go:195] Run: sudo runc list -f json
	W1202 20:55:52.539274  759377 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T20:55:52Z" level=error msg="open /run/runc: no such file or directory"
	I1202 20:55:52.539358  759377 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 20:55:52.550759  759377 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1202 20:55:52.550911  759377 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1202 20:55:52.550977  759377 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1202 20:55:52.562444  759377 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1202 20:55:52.563380  759377 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-997805" does not appear in /home/jenkins/minikube-integration/21997-407427/kubeconfig
	I1202 20:55:52.563867  759377 kubeconfig.go:62] /home/jenkins/minikube-integration/21997-407427/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-997805" cluster setting kubeconfig missing "default-k8s-diff-port-997805" context setting]
	I1202 20:55:52.564708  759377 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-407427/kubeconfig: {Name:mk71769b01652992a8aaa239aae4f5c38b3500e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:55:52.567122  759377 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1202 20:55:52.580423  759377 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1202 20:55:52.580475  759377 kubeadm.go:602] duration metric: took 29.545337ms to restartPrimaryControlPlane
	I1202 20:55:52.580492  759377 kubeadm.go:403] duration metric: took 97.179033ms to StartCluster
	I1202 20:55:52.580515  759377 settings.go:142] acquiring lock: {Name:mk79858205e0c110b31d2645b49097e349ff37c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:55:52.580624  759377 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-407427/kubeconfig
	I1202 20:55:52.582395  759377 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-407427/kubeconfig: {Name:mk71769b01652992a8aaa239aae4f5c38b3500e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:55:52.582737  759377 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 20:55:52.582982  759377 config.go:182] Loaded profile config "default-k8s-diff-port-997805": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 20:55:52.583044  759377 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1202 20:55:52.583145  759377 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-997805"
	I1202 20:55:52.583167  759377 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-997805"
	W1202 20:55:52.583180  759377 addons.go:248] addon storage-provisioner should already be in state true
	I1202 20:55:52.583208  759377 host.go:66] Checking if "default-k8s-diff-port-997805" exists ...
	I1202 20:55:52.583706  759377 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-997805 --format={{.State.Status}}
	I1202 20:55:52.583924  759377 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-997805"
	I1202 20:55:52.583949  759377 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-997805"
	W1202 20:55:52.583958  759377 addons.go:248] addon dashboard should already be in state true
	I1202 20:55:52.583987  759377 host.go:66] Checking if "default-k8s-diff-port-997805" exists ...
	I1202 20:55:52.584470  759377 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-997805 --format={{.State.Status}}
	I1202 20:55:52.584621  759377 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-997805"
	I1202 20:55:52.584638  759377 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-997805"
	I1202 20:55:52.584909  759377 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-997805 --format={{.State.Status}}
	I1202 20:55:52.590138  759377 out.go:179] * Verifying Kubernetes components...
	I1202 20:55:52.591985  759377 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 20:55:52.621520  759377 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-997805"
	W1202 20:55:52.621550  759377 addons.go:248] addon default-storageclass should already be in state true
	I1202 20:55:52.621581  759377 host.go:66] Checking if "default-k8s-diff-port-997805" exists ...
	I1202 20:55:52.621962  759377 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1202 20:55:52.621973  759377 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 20:55:52.622100  759377 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-997805 --format={{.State.Status}}
	I1202 20:55:52.623522  759377 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 20:55:52.623542  759377 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1202 20:55:52.623861  759377 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-997805
	I1202 20:55:52.629794  759377 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1202 20:55:52.631326  759377 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1202 20:55:52.631354  759377 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1202 20:55:52.631441  759377 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-997805
	I1202 20:55:52.650454  759377 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1202 20:55:52.650440  759377 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/default-k8s-diff-port-997805/id_rsa Username:docker}
	I1202 20:55:52.650477  759377 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1202 20:55:52.650539  759377 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-997805
	I1202 20:55:52.664697  759377 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/default-k8s-diff-port-997805/id_rsa Username:docker}
	I1202 20:55:52.687593  759377 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/default-k8s-diff-port-997805/id_rsa Username:docker}
	I1202 20:55:52.782783  759377 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 20:55:52.788136  759377 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 20:55:52.796186  759377 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1202 20:55:52.796227  759377 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1202 20:55:52.805245  759377 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-997805" to be "Ready" ...
	I1202 20:55:52.813493  759377 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1202 20:55:52.816061  759377 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1202 20:55:52.816120  759377 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1202 20:55:52.836609  759377 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1202 20:55:52.836641  759377 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1202 20:55:52.858664  759377 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1202 20:55:52.858695  759377 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1202 20:55:52.881817  759377 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1202 20:55:52.881850  759377 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1202 20:55:52.898249  759377 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1202 20:55:52.898282  759377 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1202 20:55:52.916317  759377 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1202 20:55:52.916341  759377 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1202 20:55:52.934311  759377 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1202 20:55:52.934421  759377 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1202 20:55:52.954130  759377 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1202 20:55:52.954156  759377 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1202 20:55:52.971994  759377 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
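The single kubectl apply above installs all ten dashboard manifests in one invocation instead of ten separate kubectl runs. To check that the addon actually came up afterwards, something like the following should work; the namespace and deployment name assume the stock dashboard addon manifests and are not taken from this log.

    # Hypothetical follow-up check; names assume the stock kubernetes-dashboard addon
    kubectl -n kubernetes-dashboard rollout status deployment/kubernetes-dashboard --timeout=120s
    kubectl -n kubernetes-dashboard get pods -o wide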
	I1202 20:55:50.259730  761851 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1202 20:55:50.260957  761851 start.go:159] libmachine.API.Create for "embed-certs-386191" (driver="docker")
	I1202 20:55:50.261018  761851 client.go:173] LocalClient.Create starting
	I1202 20:55:50.261131  761851 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca.pem
	I1202 20:55:50.261175  761851 main.go:143] libmachine: Decoding PEM data...
	I1202 20:55:50.261199  761851 main.go:143] libmachine: Parsing certificate...
	I1202 20:55:50.261293  761851 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21997-407427/.minikube/certs/cert.pem
	I1202 20:55:50.261321  761851 main.go:143] libmachine: Decoding PEM data...
	I1202 20:55:50.261336  761851 main.go:143] libmachine: Parsing certificate...
	I1202 20:55:50.261828  761851 cli_runner.go:164] Run: docker network inspect embed-certs-386191 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1202 20:55:50.287353  761851 cli_runner.go:211] docker network inspect embed-certs-386191 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1202 20:55:50.287436  761851 network_create.go:284] running [docker network inspect embed-certs-386191] to gather additional debugging logs...
	I1202 20:55:50.287467  761851 cli_runner.go:164] Run: docker network inspect embed-certs-386191
	W1202 20:55:50.313420  761851 cli_runner.go:211] docker network inspect embed-certs-386191 returned with exit code 1
	I1202 20:55:50.313458  761851 network_create.go:287] error running [docker network inspect embed-certs-386191]: docker network inspect embed-certs-386191: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-386191 not found
	I1202 20:55:50.313493  761851 network_create.go:289] output of [docker network inspect embed-certs-386191]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-386191 not found
	
	** /stderr **
	I1202 20:55:50.313695  761851 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 20:55:50.339597  761851 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-acf081edf266 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:42:04:c0:60:47:62} reservation:<nil>}
	I1202 20:55:50.340759  761851 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-9623a21fb225 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:6a:fc:8b:40:15:1b} reservation:<nil>}
	I1202 20:55:50.341559  761851 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-f2b79e7e26a5 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:0e:c7:f4:38:1c:32} reservation:<nil>}
	I1202 20:55:50.342581  761851 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-be4fb772701b IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:22:87:5f:38:96:b7} reservation:<nil>}
	I1202 20:55:50.343861  761851 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-13fe483902b9 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:a2:a4:21:b2:62:5a} reservation:<nil>}
	I1202 20:55:50.344785  761851 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-65ab470fa0e2 IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:16:23:28:7c:c5:24} reservation:<nil>}
	I1202 20:55:50.346012  761851 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001fb66a0}
	I1202 20:55:50.346044  761851 network_create.go:124] attempt to create docker network embed-certs-386191 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1202 20:55:50.346142  761851 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-386191 embed-certs-386191
	I1202 20:55:50.449757  761851 network_create.go:108] docker network embed-certs-386191 192.168.103.0/24 created
	I1202 20:55:50.449812  761851 kic.go:121] calculated static IP "192.168.103.2" for the "embed-certs-386191" container
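The subnet scan above is how minikube ends up on 192.168.103.0/24: it walks candidate /24s and skips any already claimed by an existing bridge. The "what is already taken" part can be reproduced with plain docker commands; the format string below is an assumption of this sketch, not what minikube itself runs.

    # List the subnet of every docker network so a free 192.168.x.0/24 can be picked
    docker network ls -q | xargs -n1 docker network inspect -f '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}} {{end}}'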
	I1202 20:55:50.449912  761851 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1202 20:55:50.476319  761851 cli_runner.go:164] Run: docker volume create embed-certs-386191 --label name.minikube.sigs.k8s.io=embed-certs-386191 --label created_by.minikube.sigs.k8s.io=true
	I1202 20:55:50.544287  761851 oci.go:103] Successfully created a docker volume embed-certs-386191
	I1202 20:55:50.544384  761851 cli_runner.go:164] Run: docker run --rm --name embed-certs-386191-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-386191 --entrypoint /usr/bin/test -v embed-certs-386191:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib
	I1202 20:55:51.390297  761851 oci.go:107] Successfully prepared a docker volume embed-certs-386191
	I1202 20:55:51.390398  761851 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 20:55:51.390416  761851 kic.go:194] Starting extracting preloaded images to volume ...
	I1202 20:55:51.390490  761851 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21997-407427/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-386191:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -I lz4 -xf /preloaded.tar -C /extractDir
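The docker run above is the preload trick: a throw-away container mounts the preloaded image tarball read-only plus an empty named volume, and untars straight into the volume so the real node container later starts with /var already populated. A stripped-down version of the same pattern follows; the tarball path and volume name are placeholders, and the image tag is taken from this log with the digest dropped for brevity.

    # Extract a host tarball into a named volume via a disposable container
    docker volume create demo-var
    docker run --rm \
      -v "$PWD/preloaded-images.tar.lz4:/preloaded.tar:ro" \
      -v demo-var:/extractDir \
      --entrypoint /usr/bin/tar \
      gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974 \
      -I lz4 -xf /preloaded.tar -C /extractDir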
	W1202 20:55:51.979014  754876 pod_ready.go:104] pod "coredns-7d764666f9-ghxk6" is not "Ready", error: <nil>
	W1202 20:55:54.048006  754876 pod_ready.go:104] pod "coredns-7d764666f9-ghxk6" is not "Ready", error: <nil>
	I1202 20:55:54.222552  759377 node_ready.go:49] node "default-k8s-diff-port-997805" is "Ready"
	I1202 20:55:54.222597  759377 node_ready.go:38] duration metric: took 1.417304277s for node "default-k8s-diff-port-997805" to be "Ready" ...
	I1202 20:55:54.222616  759377 api_server.go:52] waiting for apiserver process to appear ...
	I1202 20:55:54.222680  759377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:55:55.521273  759377 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.733090646s)
	I1202 20:55:55.521348  759377 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.707827699s)
	I1202 20:55:55.956240  759377 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.984189677s)
	I1202 20:55:55.956260  759377 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.733551247s)
	I1202 20:55:55.956296  759377 api_server.go:72] duration metric: took 3.373517458s to wait for apiserver process to appear ...
	I1202 20:55:55.956305  759377 api_server.go:88] waiting for apiserver healthz status ...
	I1202 20:55:55.956329  759377 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1202 20:55:55.957591  759377 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-997805 addons enable metrics-server
	
	I1202 20:55:55.960080  759377 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1202 20:55:55.961425  759377 addons.go:530] duration metric: took 3.378380909s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1202 20:55:55.963108  759377 api_server.go:279] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 20:55:55.963149  759377 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 20:55:56.456815  759377 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1202 20:55:56.464867  759377 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1202 20:55:56.466374  759377 api_server.go:141] control plane version: v1.34.2
	I1202 20:55:56.466405  759377 api_server.go:131] duration metric: took 510.092ms to wait for apiserver health ...
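The 500 a moment earlier came from the rbac/bootstrap-roles post-start hook still settling; on the retry the endpoint returns a plain 200/ok. With a working kubeconfig you can ask for the same per-check breakdown the log shows, without hitting port 8444 directly:

    # Same health probe minikube performs, but via kubectl; ?verbose lists each check
    kubectl get --raw '/healthz?verbose'
    kubectl get --raw '/readyz?verbose'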
	I1202 20:55:56.466417  759377 system_pods.go:43] waiting for kube-system pods to appear ...
	I1202 20:55:56.470286  759377 system_pods.go:59] 8 kube-system pods found
	I1202 20:55:56.470321  759377 system_pods.go:61] "coredns-66bc5c9577-jrln7" [37de7399-6357-4f08-9240-fc9e0d884f47] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 20:55:56.470336  759377 system_pods.go:61] "etcd-default-k8s-diff-port-997805" [446321a2-6abf-495a-8198-da2e43d8c18d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1202 20:55:56.470354  759377 system_pods.go:61] "kindnet-rzqpn" [eabc6de0-0707-4a1b-ab5a-4f6e8255bcfb] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1202 20:55:56.470364  759377 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-997805" [04c7ea7b-9b4e-44ee-8148-107daccf6b7b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1202 20:55:56.470376  759377 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-997805" [65af3257-fa45-4d8c-bab3-46c0390ab8df] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1202 20:55:56.470395  759377 system_pods.go:61] "kube-proxy-s2jpn" [407f6b3c-8d8b-47b0-b994-c061eedc6420] Running
	I1202 20:55:56.470403  759377 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-997805" [803a952f-54ba-4326-b70f-907fc7db368e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1202 20:55:56.470411  759377 system_pods.go:61] "storage-provisioner" [08893b97-1192-4f4e-8636-8f2ba82c853d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 20:55:56.470419  759377 system_pods.go:74] duration metric: took 3.994668ms to wait for pod list to return data ...
	I1202 20:55:56.470434  759377 default_sa.go:34] waiting for default service account to be created ...
	I1202 20:55:56.472796  759377 default_sa.go:45] found service account: "default"
	I1202 20:55:56.472821  759377 default_sa.go:55] duration metric: took 2.376879ms for default service account to be created ...
	I1202 20:55:56.472832  759377 system_pods.go:116] waiting for k8s-apps to be running ...
	I1202 20:55:56.476530  759377 system_pods.go:86] 8 kube-system pods found
	I1202 20:55:56.476568  759377 system_pods.go:89] "coredns-66bc5c9577-jrln7" [37de7399-6357-4f08-9240-fc9e0d884f47] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 20:55:56.476586  759377 system_pods.go:89] "etcd-default-k8s-diff-port-997805" [446321a2-6abf-495a-8198-da2e43d8c18d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1202 20:55:56.476598  759377 system_pods.go:89] "kindnet-rzqpn" [eabc6de0-0707-4a1b-ab5a-4f6e8255bcfb] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1202 20:55:56.476611  759377 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-997805" [04c7ea7b-9b4e-44ee-8148-107daccf6b7b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1202 20:55:56.476622  759377 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-997805" [65af3257-fa45-4d8c-bab3-46c0390ab8df] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1202 20:55:56.476636  759377 system_pods.go:89] "kube-proxy-s2jpn" [407f6b3c-8d8b-47b0-b994-c061eedc6420] Running
	I1202 20:55:56.476644  759377 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-997805" [803a952f-54ba-4326-b70f-907fc7db368e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1202 20:55:56.476652  759377 system_pods.go:89] "storage-provisioner" [08893b97-1192-4f4e-8636-8f2ba82c853d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 20:55:56.476666  759377 system_pods.go:126] duration metric: took 3.826088ms to wait for k8s-apps to be running ...
	I1202 20:55:56.476679  759377 system_svc.go:44] waiting for kubelet service to be running ....
	I1202 20:55:56.476731  759377 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 20:55:56.496595  759377 system_svc.go:56] duration metric: took 19.904103ms WaitForService to wait for kubelet
	I1202 20:55:56.496628  759377 kubeadm.go:587] duration metric: took 3.913848958s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 20:55:56.496651  759377 node_conditions.go:102] verifying NodePressure condition ...
	I1202 20:55:56.501320  759377 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1202 20:55:56.501357  759377 node_conditions.go:123] node cpu capacity is 8
	I1202 20:55:56.501378  759377 node_conditions.go:105] duration metric: took 4.719966ms to run NodePressure ...
	I1202 20:55:56.501394  759377 start.go:242] waiting for startup goroutines ...
	I1202 20:55:56.501406  759377 start.go:247] waiting for cluster config update ...
	I1202 20:55:56.501422  759377 start.go:256] writing updated cluster config ...
	I1202 20:55:56.501764  759377 ssh_runner.go:195] Run: rm -f paused
	I1202 20:55:56.507506  759377 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1202 20:55:56.511978  759377 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-jrln7" in "kube-system" namespace to be "Ready" or be gone ...
	W1202 20:55:58.518638  759377 pod_ready.go:104] pod "coredns-66bc5c9577-jrln7" is not "Ready", error: <nil>
	I1202 20:55:55.882395  761851 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21997-407427/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-386191:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -I lz4 -xf /preloaded.tar -C /extractDir: (4.491855191s)
	I1202 20:55:55.882432  761851 kic.go:203] duration metric: took 4.49201135s to extract preloaded images to volume ...
	W1202 20:55:55.882649  761851 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1202 20:55:55.882730  761851 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1202 20:55:55.882796  761851 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1202 20:55:55.970786  761851 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-386191 --name embed-certs-386191 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-386191 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-386191 --network embed-certs-386191 --ip 192.168.103.2 --volume embed-certs-386191:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b
	I1202 20:55:56.322797  761851 cli_runner.go:164] Run: docker container inspect embed-certs-386191 --format={{.State.Running}}
	I1202 20:55:56.346318  761851 cli_runner.go:164] Run: docker container inspect embed-certs-386191 --format={{.State.Status}}
	I1202 20:55:56.369508  761851 cli_runner.go:164] Run: docker exec embed-certs-386191 stat /var/lib/dpkg/alternatives/iptables
	I1202 20:55:56.426161  761851 oci.go:144] the created container "embed-certs-386191" has a running status.
	I1202 20:55:56.426198  761851 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21997-407427/.minikube/machines/embed-certs-386191/id_rsa...
	I1202 20:55:56.605690  761851 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21997-407427/.minikube/machines/embed-certs-386191/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1202 20:55:56.639247  761851 cli_runner.go:164] Run: docker container inspect embed-certs-386191 --format={{.State.Status}}
	I1202 20:55:56.661049  761851 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1202 20:55:56.661086  761851 kic_runner.go:114] Args: [docker exec --privileged embed-certs-386191 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1202 20:55:56.743919  761851 cli_runner.go:164] Run: docker container inspect embed-certs-386191 --format={{.State.Status}}
	I1202 20:55:56.771200  761851 machine.go:94] provisionDockerMachine start ...
	I1202 20:55:56.771338  761851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-386191
	I1202 20:55:56.796209  761851 main.go:143] libmachine: Using SSH client type: native
	I1202 20:55:56.796568  761851 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33513 <nil> <nil>}
	I1202 20:55:56.796593  761851 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 20:55:56.950615  761851 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-386191
	
	I1202 20:55:56.950657  761851 ubuntu.go:182] provisioning hostname "embed-certs-386191"
	I1202 20:55:56.950733  761851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-386191
	I1202 20:55:56.973211  761851 main.go:143] libmachine: Using SSH client type: native
	I1202 20:55:56.973537  761851 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33513 <nil> <nil>}
	I1202 20:55:56.973561  761851 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-386191 && echo "embed-certs-386191" | sudo tee /etc/hostname
	I1202 20:55:57.141391  761851 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-386191
	
	I1202 20:55:57.141500  761851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-386191
	I1202 20:55:57.162911  761851 main.go:143] libmachine: Using SSH client type: native
	I1202 20:55:57.163198  761851 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33513 <nil> <nil>}
	I1202 20:55:57.163228  761851 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-386191' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-386191/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-386191' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 20:55:57.310513  761851 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1202 20:55:57.310553  761851 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-407427/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-407427/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-407427/.minikube}
	I1202 20:55:57.310589  761851 ubuntu.go:190] setting up certificates
	I1202 20:55:57.310609  761851 provision.go:84] configureAuth start
	I1202 20:55:57.310699  761851 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-386191
	I1202 20:55:57.331293  761851 provision.go:143] copyHostCerts
	I1202 20:55:57.331361  761851 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-407427/.minikube/cert.pem, removing ...
	I1202 20:55:57.331377  761851 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-407427/.minikube/cert.pem
	I1202 20:55:57.331457  761851 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-407427/.minikube/cert.pem (1123 bytes)
	I1202 20:55:57.331608  761851 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-407427/.minikube/key.pem, removing ...
	I1202 20:55:57.331619  761851 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-407427/.minikube/key.pem
	I1202 20:55:57.331661  761851 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-407427/.minikube/key.pem (1675 bytes)
	I1202 20:55:57.331806  761851 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-407427/.minikube/ca.pem, removing ...
	I1202 20:55:57.331820  761851 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-407427/.minikube/ca.pem
	I1202 20:55:57.331861  761851 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-407427/.minikube/ca.pem (1082 bytes)
	I1202 20:55:57.331969  761851 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-407427/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca-key.pem org=jenkins.embed-certs-386191 san=[127.0.0.1 192.168.103.2 embed-certs-386191 localhost minikube]
	I1202 20:55:57.478343  761851 provision.go:177] copyRemoteCerts
	I1202 20:55:57.478412  761851 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 20:55:57.478461  761851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-386191
	I1202 20:55:57.503684  761851 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33513 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/embed-certs-386191/id_rsa Username:docker}
	I1202 20:55:57.613653  761851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 20:55:57.638025  761851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1202 20:55:57.660295  761851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1202 20:55:57.684474  761851 provision.go:87] duration metric: took 373.842939ms to configureAuth
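configureAuth generates a machine server certificate (SANs 127.0.0.1, 192.168.103.2, embed-certs-386191, localhost, minikube per the line above) and copies it together with the CA to /etc/docker on the node. A quick way to inspect what was copied, using only standard openssl options and the paths shown in the log:

    # Check subject, validity and SANs of the freshly provisioned server cert
    sudo openssl x509 -in /etc/docker/server.pem -noout -subject -dates
    sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'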
	I1202 20:55:57.684512  761851 ubuntu.go:206] setting minikube options for container-runtime
	I1202 20:55:57.684722  761851 config.go:182] Loaded profile config "embed-certs-386191": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 20:55:57.684859  761851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-386191
	I1202 20:55:57.705791  761851 main.go:143] libmachine: Using SSH client type: native
	I1202 20:55:57.706104  761851 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33513 <nil> <nil>}
	I1202 20:55:57.706127  761851 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 20:55:58.017837  761851 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 20:55:58.017867  761851 machine.go:97] duration metric: took 1.246644154s to provisionDockerMachine
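The file written just above, /etc/sysconfig/crio.minikube, passes --insecure-registry 10.96.0.0/12 to CRI-O so registries exposed on the service CIDR can be pulled from without TLS, and crio is restarted to pick it up. A hedged spot-check, assuming the crio unit in the base image sources that environment file:

    cat /etc/sysconfig/crio.minikube
    systemctl is-active crio
    systemctl cat crio | grep -i EnvironmentFile   # confirm the drop-in file is actually referenced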
	I1202 20:55:58.017881  761851 client.go:176] duration metric: took 7.756854866s to LocalClient.Create
	I1202 20:55:58.017904  761851 start.go:167] duration metric: took 7.756953433s to libmachine.API.Create "embed-certs-386191"
	I1202 20:55:58.017914  761851 start.go:293] postStartSetup for "embed-certs-386191" (driver="docker")
	I1202 20:55:58.017926  761851 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 20:55:58.017993  761851 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 20:55:58.018051  761851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-386191
	I1202 20:55:58.040966  761851 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33513 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/embed-certs-386191/id_rsa Username:docker}
	I1202 20:55:58.164646  761851 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 20:55:58.169173  761851 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 20:55:58.169218  761851 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 20:55:58.169234  761851 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-407427/.minikube/addons for local assets ...
	I1202 20:55:58.169292  761851 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-407427/.minikube/files for local assets ...
	I1202 20:55:58.169398  761851 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-407427/.minikube/files/etc/ssl/certs/4110322.pem -> 4110322.pem in /etc/ssl/certs
	I1202 20:55:58.169534  761851 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 20:55:58.178343  761851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/files/etc/ssl/certs/4110322.pem --> /etc/ssl/certs/4110322.pem (1708 bytes)
	I1202 20:55:58.201537  761851 start.go:296] duration metric: took 183.605841ms for postStartSetup
	I1202 20:55:58.201980  761851 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-386191
	I1202 20:55:58.222381  761851 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/config.json ...
	I1202 20:55:58.222725  761851 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 20:55:58.222779  761851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-386191
	I1202 20:55:58.246974  761851 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33513 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/embed-certs-386191/id_rsa Username:docker}
	I1202 20:55:58.349308  761851 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 20:55:58.354335  761851 start.go:128] duration metric: took 8.098942472s to createHost
	I1202 20:55:58.354367  761851 start.go:83] releasing machines lock for "embed-certs-386191", held for 8.099141281s
	I1202 20:55:58.354452  761851 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-386191
	I1202 20:55:58.375692  761851 ssh_runner.go:195] Run: cat /version.json
	I1202 20:55:58.375743  761851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-386191
	I1202 20:55:58.375778  761851 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 20:55:58.375875  761851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-386191
	I1202 20:55:58.399444  761851 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33513 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/embed-certs-386191/id_rsa Username:docker}
	I1202 20:55:58.401096  761851 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33513 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/embed-certs-386191/id_rsa Username:docker}
	I1202 20:55:58.567709  761851 ssh_runner.go:195] Run: systemctl --version
	I1202 20:55:58.576291  761851 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 20:55:58.616262  761851 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 20:55:58.621961  761851 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 20:55:58.622044  761851 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 20:55:58.651183  761851 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
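Pre-existing bridge and podman CNI configs are disabled by renaming them with a .mk_disabled suffix so only the CNI minikube installs (kindnet below) is active. To see what was disabled, or to restore a config later, on the node:

    ls -l /etc/cni/net.d/
    # restore example (name taken from the log line above):
    # sudo mv /etc/cni/net.d/87-podman-bridge.conflist.mk_disabled /etc/cni/net.d/87-podman-bridge.conflist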
	I1202 20:55:58.651217  761851 start.go:496] detecting cgroup driver to use...
	I1202 20:55:58.651265  761851 detect.go:190] detected "systemd" cgroup driver on host os
	I1202 20:55:58.651331  761851 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 20:55:58.670441  761851 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 20:55:58.684478  761851 docker.go:218] disabling cri-docker service (if available) ...
	I1202 20:55:58.684542  761851 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 20:55:58.704480  761851 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 20:55:58.725624  761851 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 20:55:58.831744  761851 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 20:55:58.927526  761851 docker.go:234] disabling docker service ...
	I1202 20:55:58.927588  761851 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 20:55:58.947085  761851 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 20:55:58.961716  761851 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 20:55:59.059830  761851 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 20:55:59.155836  761851 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 20:55:59.170575  761851 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 20:55:59.187647  761851 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1202 20:55:59.187711  761851 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:55:59.199691  761851 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1202 20:55:59.199752  761851 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:55:59.210377  761851 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:55:59.221666  761851 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:55:59.233039  761851 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 20:55:59.242836  761851 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:55:59.252564  761851 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:55:59.268580  761851 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:55:59.279302  761851 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 20:55:59.288550  761851 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 20:55:59.297166  761851 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 20:55:59.384478  761851 ssh_runner.go:195] Run: sudo systemctl restart crio
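The sed edits above configure the pause image, the systemd cgroup manager, conmon_cgroup and the unprivileged-port sysctl in /etc/crio/crio.conf.d/02-crio.conf before crio is restarted. A spot-check of the resulting drop-in (expected values per the commands above):

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.10.1"
    # cgroup_manager = "systemd"
    # conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",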
	I1202 20:55:59.534012  761851 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 20:55:59.534100  761851 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 20:55:59.538865  761851 start.go:564] Will wait 60s for crictl version
	I1202 20:55:59.538929  761851 ssh_runner.go:195] Run: which crictl
	I1202 20:55:59.542822  761851 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 20:55:59.570175  761851 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1202 20:55:59.570275  761851 ssh_runner.go:195] Run: crio --version
	I1202 20:55:59.600365  761851 ssh_runner.go:195] Run: crio --version
	I1202 20:55:59.632281  761851 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.2 ...
	I1202 20:55:59.633569  761851 cli_runner.go:164] Run: docker network inspect embed-certs-386191 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 20:55:59.653989  761851 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1202 20:55:59.659705  761851 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 20:55:59.673939  761851 kubeadm.go:884] updating cluster {Name:embed-certs-386191 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-386191 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 20:55:59.674148  761851 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 20:55:59.674231  761851 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 20:55:59.721572  761851 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 20:55:59.721623  761851 crio.go:433] Images already preloaded, skipping extraction
	I1202 20:55:59.721807  761851 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 20:55:59.763726  761851 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 20:55:59.763753  761851 cache_images.go:86] Images are preloaded, skipping loading
	I1202 20:55:59.763763  761851 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.2 crio true true} ...
	I1202 20:55:59.763877  761851 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-386191 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:embed-certs-386191 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
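The kubelet drop-in shown above overrides ExecStart with the node IP, the hostname override and the bootstrap/kubelet kubeconfig paths; it is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines further down. To inspect the merged unit on the machine (standard systemd commands, not minikube-specific):

    systemctl cat kubelet
    systemctl show kubelet -p DropInPaths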
	I1202 20:55:59.763974  761851 ssh_runner.go:195] Run: crio config
	I1202 20:55:59.830764  761851 cni.go:84] Creating CNI manager for ""
	I1202 20:55:59.830790  761851 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 20:55:59.830809  761851 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1202 20:55:59.830832  761851 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-386191 NodeName:embed-certs-386191 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 20:55:59.830950  761851 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-386191"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1202 20:55:59.831035  761851 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1202 20:55:59.841880  761851 binaries.go:51] Found k8s binaries, skipping transfer
	I1202 20:55:59.841954  761851 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 20:55:59.852027  761851 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (369 bytes)
	I1202 20:55:59.869099  761851 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1202 20:55:59.889821  761851 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2217 bytes)
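The rendered kubeadm config above (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration in one file) has just been copied to /var/tmp/minikube/kubeadm.yaml.new. It can be sanity-checked with the cached kubeadm binary before (or instead of) a full init, assuming the kubeadm version in use supports the validate subcommand, as recent releases do:

    sudo /var/lib/minikube/binaries/v1.34.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new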
	I1202 20:55:59.907811  761851 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1202 20:55:59.913347  761851 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 20:55:59.927373  761851 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	W1202 20:55:56.478639  754876 pod_ready.go:104] pod "coredns-7d764666f9-ghxk6" is not "Ready", error: <nil>
	W1202 20:55:58.978346  754876 pod_ready.go:104] pod "coredns-7d764666f9-ghxk6" is not "Ready", error: <nil>
	I1202 20:56:00.050556  761851 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 20:56:00.077300  761851 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191 for IP: 192.168.103.2
	I1202 20:56:00.077325  761851 certs.go:195] generating shared ca certs ...
	I1202 20:56:00.077348  761851 certs.go:227] acquiring lock for ca certs: {Name:mkea11924193dd4dc459e16d70207bde383196df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:56:00.077530  761851 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-407427/.minikube/ca.key
	I1202 20:56:00.077575  761851 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-407427/.minikube/proxy-client-ca.key
	I1202 20:56:00.077588  761851 certs.go:257] generating profile certs ...
	I1202 20:56:00.077664  761851 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/client.key
	I1202 20:56:00.077682  761851 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/client.crt with IP's: []
	I1202 20:56:00.252632  761851 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/client.crt ...
	I1202 20:56:00.252663  761851 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/client.crt: {Name:mk9d10e4646efb676095250174819771b143a8ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:56:00.252877  761851 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/client.key ...
	I1202 20:56:00.252896  761851 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/client.key: {Name:mk09798c33ea1ea9f8eb08ebf47349e244c0760e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:56:00.253023  761851 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/apiserver.key.1b423d29
	I1202 20:56:00.253048  761851 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/apiserver.crt.1b423d29 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1202 20:56:00.432017  761851 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/apiserver.crt.1b423d29 ...
	I1202 20:56:00.432052  761851 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/apiserver.crt.1b423d29: {Name:mk6d91134ec48be46c0e886b478e71e1794c3cdc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:56:00.432278  761851 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/apiserver.key.1b423d29 ...
	I1202 20:56:00.432302  761851 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/apiserver.key.1b423d29: {Name:mk97fa0403fe534a503bf999364704991b597622 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:56:00.432413  761851 certs.go:382] copying /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/apiserver.crt.1b423d29 -> /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/apiserver.crt
	I1202 20:56:00.432512  761851 certs.go:386] copying /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/apiserver.key.1b423d29 -> /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/apiserver.key
	I1202 20:56:00.432593  761851 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/proxy-client.key
	I1202 20:56:00.432619  761851 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/proxy-client.crt with IP's: []
	I1202 20:56:00.527766  761851 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/proxy-client.crt ...
	I1202 20:56:00.527802  761851 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/proxy-client.crt: {Name:mke9848302a1327d00a26fb35bc8d56284a1ca08 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:56:00.528029  761851 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/proxy-client.key ...
	I1202 20:56:00.528053  761851 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/proxy-client.key: {Name:mk5b412430aa6855d80ede6a2641ba2256c9a484 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
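The profile certs generated here are the admin client cert, the apiserver serving cert (SANs 10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.103.2, where 10.96.0.1 is the first address of the 10.96.0.0/12 service CIDR and therefore the in-cluster `kubernetes` Service IP), and the front-proxy "aggregator" client cert. Once the cluster is up, the service IP can be confirmed with:

    kubectl get svc kubernetes -o jsonpath='{.spec.clusterIP}{"\n"}'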
	I1202 20:56:00.528324  761851 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/411032.pem (1338 bytes)
	W1202 20:56:00.528374  761851 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-407427/.minikube/certs/411032_empty.pem, impossibly tiny 0 bytes
	I1202 20:56:00.528390  761851 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca-key.pem (1679 bytes)
	I1202 20:56:00.528423  761851 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca.pem (1082 bytes)
	I1202 20:56:00.528455  761851 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/cert.pem (1123 bytes)
	I1202 20:56:00.528493  761851 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/key.pem (1675 bytes)
	I1202 20:56:00.528552  761851 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/files/etc/ssl/certs/4110322.pem (1708 bytes)
	I1202 20:56:00.529432  761851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 20:56:00.554691  761851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1202 20:56:00.580499  761851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 20:56:00.606002  761851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1202 20:56:00.630389  761851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1202 20:56:00.655553  761851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1202 20:56:00.679419  761851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 20:56:00.704325  761851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1202 20:56:00.729255  761851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/files/etc/ssl/certs/4110322.pem --> /usr/share/ca-certificates/4110322.pem (1708 bytes)
	I1202 20:56:00.757910  761851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 20:56:00.782959  761851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/certs/411032.pem --> /usr/share/ca-certificates/411032.pem (1338 bytes)
	I1202 20:56:00.808564  761851 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 20:56:00.828291  761851 ssh_runner.go:195] Run: openssl version
	I1202 20:56:00.836796  761851 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4110322.pem && ln -fs /usr/share/ca-certificates/4110322.pem /etc/ssl/certs/4110322.pem"
	I1202 20:56:00.848469  761851 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4110322.pem
	I1202 20:56:00.853715  761851 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 20:13 /usr/share/ca-certificates/4110322.pem
	I1202 20:56:00.853790  761851 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4110322.pem
	I1202 20:56:00.905576  761851 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4110322.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 20:56:00.918463  761851 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 20:56:00.930339  761851 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 20:56:00.935452  761851 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 19:55 /usr/share/ca-certificates/minikubeCA.pem
	I1202 20:56:00.935522  761851 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 20:56:00.990051  761851 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 20:56:01.002960  761851 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/411032.pem && ln -fs /usr/share/ca-certificates/411032.pem /etc/ssl/certs/411032.pem"
	I1202 20:56:01.013994  761851 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/411032.pem
	I1202 20:56:01.019737  761851 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 20:13 /usr/share/ca-certificates/411032.pem
	I1202 20:56:01.019798  761851 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/411032.pem
	I1202 20:56:01.062700  761851 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/411032.pem /etc/ssl/certs/51391683.0"
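The `openssl x509 -hash` calls compute the subject hash used to name the `<hash>.0` symlinks in /etc/ssl/certs, which is how OpenSSL's CApath lookup finds the minikube CA and the test certificates. A rough illustration using the same files:

    # The symlink name comes from the subject hash of the certificate:
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # b5213941 in this run
    ls -l /etc/ssl/certs/b5213941.0
    # Certificates issued by minikubeCA should then verify against the hashed directory:
    sudo openssl verify -CApath /etc/ssl/certs /var/lib/minikube/certs/apiserver.crt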
	I1202 20:56:01.074487  761851 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 20:56:01.079958  761851 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1202 20:56:01.080033  761851 kubeadm.go:401] StartCluster: {Name:embed-certs-386191 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-386191 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 20:56:01.080164  761851 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 20:56:01.080231  761851 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 20:56:01.119713  761851 cri.go:89] found id: ""
	I1202 20:56:01.122354  761851 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 20:56:01.160024  761851 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1202 20:56:01.174466  761851 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1202 20:56:01.174517  761851 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 20:56:01.186198  761851 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1202 20:56:01.186294  761851 kubeadm.go:158] found existing configuration files:
	
	I1202 20:56:01.186361  761851 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1202 20:56:01.201548  761851 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1202 20:56:01.201623  761851 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1202 20:56:01.214153  761851 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1202 20:56:01.225107  761851 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1202 20:56:01.225225  761851 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 20:56:01.236050  761851 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1202 20:56:01.247714  761851 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1202 20:56:01.247785  761851 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 20:56:01.259129  761851 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1202 20:56:01.270914  761851 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1202 20:56:01.270981  761851 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1202 20:56:01.283320  761851 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1202 20:56:01.344042  761851 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1202 20:56:01.344150  761851 kubeadm.go:319] [preflight] Running pre-flight checks
	I1202 20:56:01.374696  761851 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1202 20:56:01.374786  761851 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1202 20:56:01.374832  761851 kubeadm.go:319] OS: Linux
	I1202 20:56:01.374904  761851 kubeadm.go:319] CGROUPS_CPU: enabled
	I1202 20:56:01.374965  761851 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1202 20:56:01.375027  761851 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1202 20:56:01.375100  761851 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1202 20:56:01.375165  761851 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1202 20:56:01.375227  761851 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1202 20:56:01.375295  761851 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1202 20:56:01.375351  761851 kubeadm.go:319] CGROUPS_IO: enabled
	I1202 20:56:01.461671  761851 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1202 20:56:01.461847  761851 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1202 20:56:01.462101  761851 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1202 20:56:01.473475  761851 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1202 20:56:00.519234  759377 pod_ready.go:104] pod "coredns-66bc5c9577-jrln7" is not "Ready", error: <nil>
	W1202 20:56:03.019288  759377 pod_ready.go:104] pod "coredns-66bc5c9577-jrln7" is not "Ready", error: <nil>
	I1202 20:56:01.478718  761851 out.go:252]   - Generating certificates and keys ...
	I1202 20:56:01.478829  761851 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1202 20:56:01.478911  761851 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1202 20:56:01.668758  761851 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1202 20:56:01.829895  761851 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1202 20:56:02.005376  761851 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1202 20:56:02.862909  761851 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1202 20:56:03.307052  761851 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1202 20:56:03.307703  761851 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-386191 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1202 20:56:03.383959  761851 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1202 20:56:03.384496  761851 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-386191 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1202 20:56:03.508307  761851 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1202 20:56:04.670556  761851 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1202 20:56:04.823930  761851 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1202 20:56:04.824007  761851 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	W1202 20:56:00.979309  754876 pod_ready.go:104] pod "coredns-7d764666f9-ghxk6" is not "Ready", error: <nil>
	W1202 20:56:02.980313  754876 pod_ready.go:104] pod "coredns-7d764666f9-ghxk6" is not "Ready", error: <nil>
	W1202 20:56:05.478729  754876 pod_ready.go:104] pod "coredns-7d764666f9-ghxk6" is not "Ready", error: <nil>
	I1202 20:56:05.205466  761851 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1202 20:56:05.375427  761851 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1202 20:56:05.434193  761851 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1202 20:56:05.863197  761851 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1202 20:56:06.053990  761851 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1202 20:56:06.054504  761851 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1202 20:56:06.058651  761851 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1202 20:56:05.517785  759377 pod_ready.go:104] pod "coredns-66bc5c9577-jrln7" is not "Ready", error: <nil>
	W1202 20:56:07.518439  759377 pod_ready.go:104] pod "coredns-66bc5c9577-jrln7" is not "Ready", error: <nil>
	I1202 20:56:06.060126  761851 out.go:252]   - Booting up control plane ...
	I1202 20:56:06.060244  761851 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1202 20:56:06.060364  761851 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1202 20:56:06.061268  761851 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1202 20:56:06.095037  761851 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1202 20:56:06.095189  761851 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1202 20:56:06.102515  761851 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1202 20:56:06.102696  761851 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1202 20:56:06.102769  761851 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1202 20:56:06.205490  761851 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1202 20:56:06.205715  761851 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1202 20:56:07.205674  761851 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001810301s
	I1202 20:56:07.209848  761851 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1202 20:56:07.210052  761851 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1202 20:56:07.210217  761851 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1202 20:56:07.210338  761851 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1202 20:56:08.756010  761851 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.546069674s
	I1202 20:56:09.869674  761851 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.659323153s
	W1202 20:56:07.979740  754876 pod_ready.go:104] pod "coredns-7d764666f9-ghxk6" is not "Ready", error: <nil>
	W1202 20:56:10.478689  754876 pod_ready.go:104] pod "coredns-7d764666f9-ghxk6" is not "Ready", error: <nil>
	I1202 20:56:11.711917  761851 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.502061899s
	I1202 20:56:11.728157  761851 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1202 20:56:11.740906  761851 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1202 20:56:11.753231  761851 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1202 20:56:11.753530  761851 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-386191 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1202 20:56:11.764705  761851 kubeadm.go:319] [bootstrap-token] Using token: c8uju2.57r80hlp0isn29k2
	I1202 20:56:11.766183  761851 out.go:252]   - Configuring RBAC rules ...
	I1202 20:56:11.766339  761851 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1202 20:56:11.770506  761851 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1202 20:56:11.777525  761851 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1202 20:56:11.780772  761851 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1202 20:56:11.785459  761851 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1202 20:56:11.788963  761851 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1202 20:56:12.119080  761851 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1202 20:56:12.539952  761851 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1202 20:56:13.118875  761851 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1202 20:56:13.119856  761851 kubeadm.go:319] 
	I1202 20:56:13.119972  761851 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1202 20:56:13.119991  761851 kubeadm.go:319] 
	I1202 20:56:13.120096  761851 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1202 20:56:13.120106  761851 kubeadm.go:319] 
	I1202 20:56:13.120132  761851 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1202 20:56:13.120189  761851 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1202 20:56:13.120239  761851 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1202 20:56:13.120250  761851 kubeadm.go:319] 
	I1202 20:56:13.120296  761851 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1202 20:56:13.120303  761851 kubeadm.go:319] 
	I1202 20:56:13.120350  761851 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1202 20:56:13.120356  761851 kubeadm.go:319] 
	I1202 20:56:13.120405  761851 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1202 20:56:13.120480  761851 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1202 20:56:13.120550  761851 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1202 20:56:13.120559  761851 kubeadm.go:319] 
	I1202 20:56:13.120655  761851 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1202 20:56:13.120760  761851 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1202 20:56:13.120770  761851 kubeadm.go:319] 
	I1202 20:56:13.120947  761851 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token c8uju2.57r80hlp0isn29k2 \
	I1202 20:56:13.121116  761851 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:48349ffa2369b87cd15480ece56edb1c5c5d97a828930f9dfbf1d1ceccf80ff4 \
	I1202 20:56:13.121150  761851 kubeadm.go:319] 	--control-plane 
	I1202 20:56:13.121158  761851 kubeadm.go:319] 
	I1202 20:56:13.121277  761851 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1202 20:56:13.121292  761851 kubeadm.go:319] 
	I1202 20:56:13.121403  761851 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token c8uju2.57r80hlp0isn29k2 \
	I1202 20:56:13.121546  761851 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:48349ffa2369b87cd15480ece56edb1c5c5d97a828930f9dfbf1d1ceccf80ff4 
	I1202 20:56:13.124563  761851 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1202 20:56:13.124664  761851 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
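The join commands printed by kubeadm embed a bootstrap token whose ttl is 24h (per the InitConfiguration above), so they go stale. A fresh worker join command can always be regenerated later with the standard kubeadm subcommand (not something minikube itself runs here):

    sudo /var/lib/minikube/binaries/v1.34.2/kubeadm token create --print-join-command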
	I1202 20:56:13.124688  761851 cni.go:84] Creating CNI manager for ""
	I1202 20:56:13.124700  761851 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 20:56:13.126500  761851 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1202 20:56:10.017702  759377 pod_ready.go:104] pod "coredns-66bc5c9577-jrln7" is not "Ready", error: <nil>
	W1202 20:56:12.018270  759377 pod_ready.go:104] pod "coredns-66bc5c9577-jrln7" is not "Ready", error: <nil>
	I1202 20:56:13.128206  761851 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1202 20:56:13.133011  761851 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1202 20:56:13.133036  761851 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1202 20:56:13.147210  761851 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
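Applying /var/tmp/minikube/cni.yaml installs the kindnet CNI recommended above for the docker driver + crio runtime. A hedged way to watch it come up once pods schedule, assuming the DaemonSet pods carry the app=kindnet label used by minikube's manifest:

    kubectl -n kube-system get pods -l app=kindnet -o wide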
	I1202 20:56:13.367880  761851 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1202 20:56:13.368008  761851 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 20:56:13.368037  761851 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-386191 minikube.k8s.io/updated_at=2025_12_02T20_56_13_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=c0dec3f81fa1d67ed4ee425e0873d0aa009f9b92 minikube.k8s.io/name=embed-certs-386191 minikube.k8s.io/primary=true
	I1202 20:56:13.378170  761851 ops.go:34] apiserver oom_adj: -16
	I1202 20:56:13.456213  761851 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 20:56:13.956791  761851 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 20:56:14.456911  761851 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 20:56:14.957002  761851 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1202 20:56:12.481885  754876 pod_ready.go:104] pod "coredns-7d764666f9-ghxk6" is not "Ready", error: <nil>
	I1202 20:56:14.478647  754876 pod_ready.go:94] pod "coredns-7d764666f9-ghxk6" is "Ready"
	I1202 20:56:14.478679  754876 pod_ready.go:86] duration metric: took 33.50633852s for pod "coredns-7d764666f9-ghxk6" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:14.481510  754876 pod_ready.go:83] waiting for pod "etcd-no-preload-336331" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:14.487252  754876 pod_ready.go:94] pod "etcd-no-preload-336331" is "Ready"
	I1202 20:56:14.487284  754876 pod_ready.go:86] duration metric: took 5.742661ms for pod "etcd-no-preload-336331" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:14.489709  754876 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-336331" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:14.493975  754876 pod_ready.go:94] pod "kube-apiserver-no-preload-336331" is "Ready"
	I1202 20:56:14.494030  754876 pod_ready.go:86] duration metric: took 4.293005ms for pod "kube-apiserver-no-preload-336331" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:14.496555  754876 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-336331" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:14.676017  754876 pod_ready.go:94] pod "kube-controller-manager-no-preload-336331" is "Ready"
	I1202 20:56:14.676054  754876 pod_ready.go:86] duration metric: took 179.468852ms for pod "kube-controller-manager-no-preload-336331" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:14.876507  754876 pod_ready.go:83] waiting for pod "kube-proxy-qc2v9" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:15.276156  754876 pod_ready.go:94] pod "kube-proxy-qc2v9" is "Ready"
	I1202 20:56:15.276184  754876 pod_ready.go:86] duration metric: took 399.652639ms for pod "kube-proxy-qc2v9" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:15.476929  754876 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-336331" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:15.876785  754876 pod_ready.go:94] pod "kube-scheduler-no-preload-336331" is "Ready"
	I1202 20:56:15.876821  754876 pod_ready.go:86] duration metric: took 399.859554ms for pod "kube-scheduler-no-preload-336331" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:15.876837  754876 pod_ready.go:40] duration metric: took 34.909444308s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1202 20:56:15.923408  754876 start.go:625] kubectl: 1.34.2, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1202 20:56:15.925124  754876 out.go:179] * Done! kubectl is now configured to use "no-preload-336331" cluster and "default" namespace by default
	I1202 20:56:15.457186  761851 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 20:56:15.957341  761851 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 20:56:16.456356  761851 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 20:56:16.956786  761851 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 20:56:17.457273  761851 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 20:56:17.529683  761851 kubeadm.go:1114] duration metric: took 4.161789754s to wait for elevateKubeSystemPrivileges
	I1202 20:56:17.529733  761851 kubeadm.go:403] duration metric: took 16.449707403s to StartCluster
	I1202 20:56:17.529758  761851 settings.go:142] acquiring lock: {Name:mk79858205e0c110b31d2645b49097e349ff37c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:56:17.529828  761851 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-407427/kubeconfig
	I1202 20:56:17.531386  761851 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-407427/kubeconfig: {Name:mk71769b01652992a8aaa239aae4f5c38b3500e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:56:17.531613  761851 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1202 20:56:17.531617  761851 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 20:56:17.531699  761851 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1202 20:56:17.531801  761851 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-386191"
	I1202 20:56:17.531817  761851 config.go:182] Loaded profile config "embed-certs-386191": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 20:56:17.531839  761851 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-386191"
	I1202 20:56:17.531817  761851 addons.go:70] Setting default-storageclass=true in profile "embed-certs-386191"
	I1202 20:56:17.531877  761851 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-386191"
	I1202 20:56:17.531882  761851 host.go:66] Checking if "embed-certs-386191" exists ...
	I1202 20:56:17.532342  761851 cli_runner.go:164] Run: docker container inspect embed-certs-386191 --format={{.State.Status}}
	I1202 20:56:17.532507  761851 cli_runner.go:164] Run: docker container inspect embed-certs-386191 --format={{.State.Status}}
	I1202 20:56:17.534531  761851 out.go:179] * Verifying Kubernetes components...
	I1202 20:56:17.535950  761851 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 20:56:17.558800  761851 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 20:56:17.560025  761851 addons.go:239] Setting addon default-storageclass=true in "embed-certs-386191"
	I1202 20:56:17.560084  761851 host.go:66] Checking if "embed-certs-386191" exists ...
	I1202 20:56:17.560580  761851 cli_runner.go:164] Run: docker container inspect embed-certs-386191 --format={{.State.Status}}
	I1202 20:56:17.561225  761851 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 20:56:17.561246  761851 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1202 20:56:17.561324  761851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-386191
	I1202 20:56:17.590711  761851 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33513 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/embed-certs-386191/id_rsa Username:docker}
	I1202 20:56:17.592956  761851 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1202 20:56:17.592992  761851 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1202 20:56:17.593051  761851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-386191
	I1202 20:56:17.617931  761851 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33513 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/embed-certs-386191/id_rsa Username:docker}
	I1202 20:56:17.638614  761851 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1202 20:56:17.681673  761851 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 20:56:17.712144  761851 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 20:56:17.735866  761851 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1202 20:56:17.815035  761851 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1202 20:56:17.816483  761851 node_ready.go:35] waiting up to 6m0s for node "embed-certs-386191" to be "Ready" ...
	I1202 20:56:18.003767  761851 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1202 20:56:14.018515  759377 pod_ready.go:104] pod "coredns-66bc5c9577-jrln7" is not "Ready", error: <nil>
	W1202 20:56:16.020009  759377 pod_ready.go:104] pod "coredns-66bc5c9577-jrln7" is not "Ready", error: <nil>
	W1202 20:56:18.517905  759377 pod_ready.go:104] pod "coredns-66bc5c9577-jrln7" is not "Ready", error: <nil>
	I1202 20:56:18.004793  761851 addons.go:530] duration metric: took 473.08842ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1202 20:56:18.319554  761851 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-386191" context rescaled to 1 replicas
	W1202 20:56:19.820111  761851 node_ready.go:57] node "embed-certs-386191" has "Ready":"False" status (will retry)
	W1202 20:56:21.019501  759377 pod_ready.go:104] pod "coredns-66bc5c9577-jrln7" is not "Ready", error: <nil>
	W1202 20:56:23.518373  759377 pod_ready.go:104] pod "coredns-66bc5c9577-jrln7" is not "Ready", error: <nil>
	W1202 20:56:22.320036  761851 node_ready.go:57] node "embed-certs-386191" has "Ready":"False" status (will retry)
	W1202 20:56:24.320559  761851 node_ready.go:57] node "embed-certs-386191" has "Ready":"False" status (will retry)
	W1202 20:56:26.018767  759377 pod_ready.go:104] pod "coredns-66bc5c9577-jrln7" is not "Ready", error: <nil>
	W1202 20:56:28.019223  759377 pod_ready.go:104] pod "coredns-66bc5c9577-jrln7" is not "Ready", error: <nil>
	W1202 20:56:26.320730  761851 node_ready.go:57] node "embed-certs-386191" has "Ready":"False" status (will retry)
	W1202 20:56:28.820145  761851 node_ready.go:57] node "embed-certs-386191" has "Ready":"False" status (will retry)
	W1202 20:56:30.519140  759377 pod_ready.go:104] pod "coredns-66bc5c9577-jrln7" is not "Ready", error: <nil>
	I1202 20:56:32.019528  759377 pod_ready.go:94] pod "coredns-66bc5c9577-jrln7" is "Ready"
	I1202 20:56:32.019562  759377 pod_ready.go:86] duration metric: took 35.507552593s for pod "coredns-66bc5c9577-jrln7" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:32.022973  759377 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-997805" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:32.027973  759377 pod_ready.go:94] pod "etcd-default-k8s-diff-port-997805" is "Ready"
	I1202 20:56:32.028009  759377 pod_ready.go:86] duration metric: took 5.002878ms for pod "etcd-default-k8s-diff-port-997805" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:32.030436  759377 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-997805" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:32.035486  759377 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-997805" is "Ready"
	I1202 20:56:32.035517  759377 pod_ready.go:86] duration metric: took 5.054721ms for pod "kube-apiserver-default-k8s-diff-port-997805" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:32.038168  759377 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-997805" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:32.216544  759377 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-997805" is "Ready"
	I1202 20:56:32.216573  759377 pod_ready.go:86] duration metric: took 178.377154ms for pod "kube-controller-manager-default-k8s-diff-port-997805" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:32.417009  759377 pod_ready.go:83] waiting for pod "kube-proxy-s2jpn" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:32.816568  759377 pod_ready.go:94] pod "kube-proxy-s2jpn" is "Ready"
	I1202 20:56:32.816591  759377 pod_ready.go:86] duration metric: took 399.551658ms for pod "kube-proxy-s2jpn" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:33.016734  759377 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-997805" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:33.415885  759377 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-997805" is "Ready"
	I1202 20:56:33.415912  759377 pod_ready.go:86] duration metric: took 399.150299ms for pod "kube-scheduler-default-k8s-diff-port-997805" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:33.415928  759377 pod_ready.go:40] duration metric: took 36.908377916s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1202 20:56:33.462852  759377 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1202 20:56:33.464589  759377 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-997805" cluster and "default" namespace by default
	I1202 20:56:30.319943  761851 node_ready.go:49] node "embed-certs-386191" is "Ready"
	I1202 20:56:30.319978  761851 node_ready.go:38] duration metric: took 12.503459453s for node "embed-certs-386191" to be "Ready" ...
	I1202 20:56:30.319996  761851 api_server.go:52] waiting for apiserver process to appear ...
	I1202 20:56:30.320050  761851 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:56:30.333122  761851 api_server.go:72] duration metric: took 12.801460339s to wait for apiserver process to appear ...
	I1202 20:56:30.333155  761851 api_server.go:88] waiting for apiserver healthz status ...
	I1202 20:56:30.333181  761851 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1202 20:56:30.338949  761851 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1202 20:56:30.340352  761851 api_server.go:141] control plane version: v1.34.2
	I1202 20:56:30.340387  761851 api_server.go:131] duration metric: took 7.223849ms to wait for apiserver health ...
	I1202 20:56:30.340400  761851 system_pods.go:43] waiting for kube-system pods to appear ...
	I1202 20:56:30.345084  761851 system_pods.go:59] 8 kube-system pods found
	I1202 20:56:30.345142  761851 system_pods.go:61] "coredns-66bc5c9577-q6l9x" [e7159eb1-3cde-437a-99e3-760c9c397977] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 20:56:30.345152  761851 system_pods.go:61] "etcd-embed-certs-386191" [5ca26226-7c69-4d6b-a513-2187c089d96c] Running
	I1202 20:56:30.345160  761851 system_pods.go:61] "kindnet-x9jsh" [410369de-877d-46e5-8f7c-cd8076d1d2f5] Running
	I1202 20:56:30.345166  761851 system_pods.go:61] "kube-apiserver-embed-certs-386191" [dd2a96bf-6afe-43b8-b01d-a88496c7c9b9] Running
	I1202 20:56:30.345173  761851 system_pods.go:61] "kube-controller-manager-embed-certs-386191" [ba51ea44-285e-494e-a173-75f314cb6a5e] Running
	I1202 20:56:30.345178  761851 system_pods.go:61] "kube-proxy-854r8" [6c9652b0-217c-466f-9345-7364f0e39936] Running
	I1202 20:56:30.345185  761851 system_pods.go:61] "kube-scheduler-embed-certs-386191" [abe313cb-061b-4d65-b0e0-f369aacdc1c4] Running
	I1202 20:56:30.345195  761851 system_pods.go:61] "storage-provisioner" [d37e55bb-bb1f-4659-a9c5-14d47011bd23] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 20:56:30.345205  761851 system_pods.go:74] duration metric: took 4.796405ms to wait for pod list to return data ...
	I1202 20:56:30.345227  761851 default_sa.go:34] waiting for default service account to be created ...
	I1202 20:56:30.348608  761851 default_sa.go:45] found service account: "default"
	I1202 20:56:30.348639  761851 default_sa.go:55] duration metric: took 3.40167ms for default service account to be created ...
	I1202 20:56:30.348652  761851 system_pods.go:116] waiting for k8s-apps to be running ...
	I1202 20:56:30.352973  761851 system_pods.go:86] 8 kube-system pods found
	I1202 20:56:30.353004  761851 system_pods.go:89] "coredns-66bc5c9577-q6l9x" [e7159eb1-3cde-437a-99e3-760c9c397977] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 20:56:30.353011  761851 system_pods.go:89] "etcd-embed-certs-386191" [5ca26226-7c69-4d6b-a513-2187c089d96c] Running
	I1202 20:56:30.353017  761851 system_pods.go:89] "kindnet-x9jsh" [410369de-877d-46e5-8f7c-cd8076d1d2f5] Running
	I1202 20:56:30.353021  761851 system_pods.go:89] "kube-apiserver-embed-certs-386191" [dd2a96bf-6afe-43b8-b01d-a88496c7c9b9] Running
	I1202 20:56:30.353025  761851 system_pods.go:89] "kube-controller-manager-embed-certs-386191" [ba51ea44-285e-494e-a173-75f314cb6a5e] Running
	I1202 20:56:30.353028  761851 system_pods.go:89] "kube-proxy-854r8" [6c9652b0-217c-466f-9345-7364f0e39936] Running
	I1202 20:56:30.353031  761851 system_pods.go:89] "kube-scheduler-embed-certs-386191" [abe313cb-061b-4d65-b0e0-f369aacdc1c4] Running
	I1202 20:56:30.353036  761851 system_pods.go:89] "storage-provisioner" [d37e55bb-bb1f-4659-a9c5-14d47011bd23] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 20:56:30.353064  761851 retry.go:31] will retry after 268.066085ms: missing components: kube-dns
	I1202 20:56:30.626568  761851 system_pods.go:86] 8 kube-system pods found
	I1202 20:56:30.626621  761851 system_pods.go:89] "coredns-66bc5c9577-q6l9x" [e7159eb1-3cde-437a-99e3-760c9c397977] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 20:56:30.626630  761851 system_pods.go:89] "etcd-embed-certs-386191" [5ca26226-7c69-4d6b-a513-2187c089d96c] Running
	I1202 20:56:30.626639  761851 system_pods.go:89] "kindnet-x9jsh" [410369de-877d-46e5-8f7c-cd8076d1d2f5] Running
	I1202 20:56:30.626645  761851 system_pods.go:89] "kube-apiserver-embed-certs-386191" [dd2a96bf-6afe-43b8-b01d-a88496c7c9b9] Running
	I1202 20:56:30.626656  761851 system_pods.go:89] "kube-controller-manager-embed-certs-386191" [ba51ea44-285e-494e-a173-75f314cb6a5e] Running
	I1202 20:56:30.626662  761851 system_pods.go:89] "kube-proxy-854r8" [6c9652b0-217c-466f-9345-7364f0e39936] Running
	I1202 20:56:30.626675  761851 system_pods.go:89] "kube-scheduler-embed-certs-386191" [abe313cb-061b-4d65-b0e0-f369aacdc1c4] Running
	I1202 20:56:30.626687  761851 system_pods.go:89] "storage-provisioner" [d37e55bb-bb1f-4659-a9c5-14d47011bd23] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 20:56:30.626708  761851 retry.go:31] will retry after 295.685816ms: missing components: kube-dns
	I1202 20:56:30.926543  761851 system_pods.go:86] 8 kube-system pods found
	I1202 20:56:30.926598  761851 system_pods.go:89] "coredns-66bc5c9577-q6l9x" [e7159eb1-3cde-437a-99e3-760c9c397977] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 20:56:30.926608  761851 system_pods.go:89] "etcd-embed-certs-386191" [5ca26226-7c69-4d6b-a513-2187c089d96c] Running
	I1202 20:56:30.926615  761851 system_pods.go:89] "kindnet-x9jsh" [410369de-877d-46e5-8f7c-cd8076d1d2f5] Running
	I1202 20:56:30.926621  761851 system_pods.go:89] "kube-apiserver-embed-certs-386191" [dd2a96bf-6afe-43b8-b01d-a88496c7c9b9] Running
	I1202 20:56:30.926628  761851 system_pods.go:89] "kube-controller-manager-embed-certs-386191" [ba51ea44-285e-494e-a173-75f314cb6a5e] Running
	I1202 20:56:30.926634  761851 system_pods.go:89] "kube-proxy-854r8" [6c9652b0-217c-466f-9345-7364f0e39936] Running
	I1202 20:56:30.926639  761851 system_pods.go:89] "kube-scheduler-embed-certs-386191" [abe313cb-061b-4d65-b0e0-f369aacdc1c4] Running
	I1202 20:56:30.926646  761851 system_pods.go:89] "storage-provisioner" [d37e55bb-bb1f-4659-a9c5-14d47011bd23] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 20:56:30.926671  761851 retry.go:31] will retry after 481.864787ms: missing components: kube-dns
	I1202 20:56:31.413061  761851 system_pods.go:86] 8 kube-system pods found
	I1202 20:56:31.413118  761851 system_pods.go:89] "coredns-66bc5c9577-q6l9x" [e7159eb1-3cde-437a-99e3-760c9c397977] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 20:56:31.413126  761851 system_pods.go:89] "etcd-embed-certs-386191" [5ca26226-7c69-4d6b-a513-2187c089d96c] Running
	I1202 20:56:31.413131  761851 system_pods.go:89] "kindnet-x9jsh" [410369de-877d-46e5-8f7c-cd8076d1d2f5] Running
	I1202 20:56:31.413134  761851 system_pods.go:89] "kube-apiserver-embed-certs-386191" [dd2a96bf-6afe-43b8-b01d-a88496c7c9b9] Running
	I1202 20:56:31.413141  761851 system_pods.go:89] "kube-controller-manager-embed-certs-386191" [ba51ea44-285e-494e-a173-75f314cb6a5e] Running
	I1202 20:56:31.413146  761851 system_pods.go:89] "kube-proxy-854r8" [6c9652b0-217c-466f-9345-7364f0e39936] Running
	I1202 20:56:31.413151  761851 system_pods.go:89] "kube-scheduler-embed-certs-386191" [abe313cb-061b-4d65-b0e0-f369aacdc1c4] Running
	I1202 20:56:31.413158  761851 system_pods.go:89] "storage-provisioner" [d37e55bb-bb1f-4659-a9c5-14d47011bd23] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 20:56:31.413178  761851 retry.go:31] will retry after 524.282357ms: missing components: kube-dns
	I1202 20:56:31.942153  761851 system_pods.go:86] 8 kube-system pods found
	I1202 20:56:31.942180  761851 system_pods.go:89] "coredns-66bc5c9577-q6l9x" [e7159eb1-3cde-437a-99e3-760c9c397977] Running
	I1202 20:56:31.942185  761851 system_pods.go:89] "etcd-embed-certs-386191" [5ca26226-7c69-4d6b-a513-2187c089d96c] Running
	I1202 20:56:31.942189  761851 system_pods.go:89] "kindnet-x9jsh" [410369de-877d-46e5-8f7c-cd8076d1d2f5] Running
	I1202 20:56:31.942192  761851 system_pods.go:89] "kube-apiserver-embed-certs-386191" [dd2a96bf-6afe-43b8-b01d-a88496c7c9b9] Running
	I1202 20:56:31.942196  761851 system_pods.go:89] "kube-controller-manager-embed-certs-386191" [ba51ea44-285e-494e-a173-75f314cb6a5e] Running
	I1202 20:56:31.942199  761851 system_pods.go:89] "kube-proxy-854r8" [6c9652b0-217c-466f-9345-7364f0e39936] Running
	I1202 20:56:31.942202  761851 system_pods.go:89] "kube-scheduler-embed-certs-386191" [abe313cb-061b-4d65-b0e0-f369aacdc1c4] Running
	I1202 20:56:31.942205  761851 system_pods.go:89] "storage-provisioner" [d37e55bb-bb1f-4659-a9c5-14d47011bd23] Running
	I1202 20:56:31.942212  761851 system_pods.go:126] duration metric: took 1.593529924s to wait for k8s-apps to be running ...
	I1202 20:56:31.942219  761851 system_svc.go:44] waiting for kubelet service to be running ....
	I1202 20:56:31.942261  761851 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 20:56:31.955055  761851 system_svc.go:56] duration metric: took 12.827769ms WaitForService to wait for kubelet
	I1202 20:56:31.955097  761851 kubeadm.go:587] duration metric: took 14.423443169s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 20:56:31.955121  761851 node_conditions.go:102] verifying NodePressure condition ...
	I1202 20:56:31.958210  761851 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1202 20:56:31.958249  761851 node_conditions.go:123] node cpu capacity is 8
	I1202 20:56:31.958265  761851 node_conditions.go:105] duration metric: took 3.138976ms to run NodePressure ...
	I1202 20:56:31.958278  761851 start.go:242] waiting for startup goroutines ...
	I1202 20:56:31.958285  761851 start.go:247] waiting for cluster config update ...
	I1202 20:56:31.958296  761851 start.go:256] writing updated cluster config ...
	I1202 20:56:31.958597  761851 ssh_runner.go:195] Run: rm -f paused
	I1202 20:56:31.962581  761851 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1202 20:56:31.966130  761851 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-q6l9x" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:31.971173  761851 pod_ready.go:94] pod "coredns-66bc5c9577-q6l9x" is "Ready"
	I1202 20:56:31.971201  761851 pod_ready.go:86] duration metric: took 5.04828ms for pod "coredns-66bc5c9577-q6l9x" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:31.973411  761851 pod_ready.go:83] waiting for pod "etcd-embed-certs-386191" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:31.978228  761851 pod_ready.go:94] pod "etcd-embed-certs-386191" is "Ready"
	I1202 20:56:31.978263  761851 pod_ready.go:86] duration metric: took 4.826356ms for pod "etcd-embed-certs-386191" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:31.980684  761851 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-386191" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:31.984771  761851 pod_ready.go:94] pod "kube-apiserver-embed-certs-386191" is "Ready"
	I1202 20:56:31.984803  761851 pod_ready.go:86] duration metric: took 4.09504ms for pod "kube-apiserver-embed-certs-386191" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:31.986878  761851 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-386191" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:32.367606  761851 pod_ready.go:94] pod "kube-controller-manager-embed-certs-386191" is "Ready"
	I1202 20:56:32.367637  761851 pod_ready.go:86] duration metric: took 380.737416ms for pod "kube-controller-manager-embed-certs-386191" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:32.567519  761851 pod_ready.go:83] waiting for pod "kube-proxy-854r8" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:32.967144  761851 pod_ready.go:94] pod "kube-proxy-854r8" is "Ready"
	I1202 20:56:32.967177  761851 pod_ready.go:86] duration metric: took 399.625971ms for pod "kube-proxy-854r8" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:33.168115  761851 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-386191" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:33.566983  761851 pod_ready.go:94] pod "kube-scheduler-embed-certs-386191" is "Ready"
	I1202 20:56:33.567015  761851 pod_ready.go:86] duration metric: took 398.86856ms for pod "kube-scheduler-embed-certs-386191" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:56:33.567030  761851 pod_ready.go:40] duration metric: took 1.604412945s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1202 20:56:33.625323  761851 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1202 20:56:33.627128  761851 out.go:179] * Done! kubectl is now configured to use "embed-certs-386191" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 02 20:56:16 default-k8s-diff-port-997805 crio[565]: time="2025-12-02T20:56:16.720883621Z" level=info msg="Started container" PID=1755 containerID=8b9571fc1afb59ffda70959998b9386a8cc1a412c773117671bd059b0c151419 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vhp59/dashboard-metrics-scraper id=7068fee1-e2f8-4bed-b392-9e04e9b48792 name=/runtime.v1.RuntimeService/StartContainer sandboxID=82b8464121953a993bc43eb6fe67912f54b3283ad0ce74e3a1bd67f67c091d49
	Dec 02 20:56:16 default-k8s-diff-port-997805 crio[565]: time="2025-12-02T20:56:16.860332035Z" level=info msg="Removing container: 200b961bc8b01d2d50a50e095ea2056aa5e2e23febb2edfacc81d4ddfb956fc0" id=c7642385-e74a-4a35-be4b-a35c75aad6a1 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 02 20:56:16 default-k8s-diff-port-997805 crio[565]: time="2025-12-02T20:56:16.87428434Z" level=info msg="Removed container 200b961bc8b01d2d50a50e095ea2056aa5e2e23febb2edfacc81d4ddfb956fc0: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vhp59/dashboard-metrics-scraper" id=c7642385-e74a-4a35-be4b-a35c75aad6a1 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 02 20:56:25 default-k8s-diff-port-997805 crio[565]: time="2025-12-02T20:56:25.885227609Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=41ead202-ee56-4fee-b2c6-a899c09bc22c name=/runtime.v1.ImageService/ImageStatus
	Dec 02 20:56:25 default-k8s-diff-port-997805 crio[565]: time="2025-12-02T20:56:25.886279227Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=b0431df3-3ffe-44fc-b59e-ab034a5e82cb name=/runtime.v1.ImageService/ImageStatus
	Dec 02 20:56:25 default-k8s-diff-port-997805 crio[565]: time="2025-12-02T20:56:25.887367804Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=c102ae40-1aee-41e3-a464-6dfdcd001b40 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 20:56:25 default-k8s-diff-port-997805 crio[565]: time="2025-12-02T20:56:25.887527059Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 20:56:25 default-k8s-diff-port-997805 crio[565]: time="2025-12-02T20:56:25.892606016Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 20:56:25 default-k8s-diff-port-997805 crio[565]: time="2025-12-02T20:56:25.892825228Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/d561facd3bc5b213f169448a0b25db351a0e272a62053c61991d04124aa2333b/merged/etc/passwd: no such file or directory"
	Dec 02 20:56:25 default-k8s-diff-port-997805 crio[565]: time="2025-12-02T20:56:25.892854415Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/d561facd3bc5b213f169448a0b25db351a0e272a62053c61991d04124aa2333b/merged/etc/group: no such file or directory"
	Dec 02 20:56:25 default-k8s-diff-port-997805 crio[565]: time="2025-12-02T20:56:25.893795321Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 20:56:25 default-k8s-diff-port-997805 crio[565]: time="2025-12-02T20:56:25.930114881Z" level=info msg="Created container 35e720802a1bf3bbed62adc89a0f19dce7a67de2db637573eb1894ab9ebb8f24: kube-system/storage-provisioner/storage-provisioner" id=c102ae40-1aee-41e3-a464-6dfdcd001b40 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 20:56:25 default-k8s-diff-port-997805 crio[565]: time="2025-12-02T20:56:25.930756886Z" level=info msg="Starting container: 35e720802a1bf3bbed62adc89a0f19dce7a67de2db637573eb1894ab9ebb8f24" id=edbb469e-ee1b-441b-9c79-1b0f4f4df2e7 name=/runtime.v1.RuntimeService/StartContainer
	Dec 02 20:56:25 default-k8s-diff-port-997805 crio[565]: time="2025-12-02T20:56:25.93256201Z" level=info msg="Started container" PID=1772 containerID=35e720802a1bf3bbed62adc89a0f19dce7a67de2db637573eb1894ab9ebb8f24 description=kube-system/storage-provisioner/storage-provisioner id=edbb469e-ee1b-441b-9c79-1b0f4f4df2e7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=eba18f9d9797bf1e231fb0774d0cc55e6bc3bc97ed16f2daa02c5add6153e22d
	Dec 02 20:56:36 default-k8s-diff-port-997805 crio[565]: time="2025-12-02T20:56:36.753882942Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=29d64113-d741-484e-ae44-a0f1e042da40 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 20:56:36 default-k8s-diff-port-997805 crio[565]: time="2025-12-02T20:56:36.754897124Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=dfbd5b69-be85-44be-838e-6618a9d7728a name=/runtime.v1.ImageService/ImageStatus
	Dec 02 20:56:36 default-k8s-diff-port-997805 crio[565]: time="2025-12-02T20:56:36.756127139Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vhp59/dashboard-metrics-scraper" id=70be0df9-c03b-45ff-be1c-f58e610a608d name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 20:56:36 default-k8s-diff-port-997805 crio[565]: time="2025-12-02T20:56:36.756265122Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 20:56:36 default-k8s-diff-port-997805 crio[565]: time="2025-12-02T20:56:36.765090609Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 20:56:36 default-k8s-diff-port-997805 crio[565]: time="2025-12-02T20:56:36.768588064Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 20:56:36 default-k8s-diff-port-997805 crio[565]: time="2025-12-02T20:56:36.802729213Z" level=info msg="Created container c9080db2b6daf76ef63b2b59e74d0239edbb838d08547298dd4502c7c3b4d9f4: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vhp59/dashboard-metrics-scraper" id=70be0df9-c03b-45ff-be1c-f58e610a608d name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 20:56:36 default-k8s-diff-port-997805 crio[565]: time="2025-12-02T20:56:36.803662467Z" level=info msg="Starting container: c9080db2b6daf76ef63b2b59e74d0239edbb838d08547298dd4502c7c3b4d9f4" id=c866f162-c4f5-41df-8d94-e865500f2435 name=/runtime.v1.RuntimeService/StartContainer
	Dec 02 20:56:36 default-k8s-diff-port-997805 crio[565]: time="2025-12-02T20:56:36.806157915Z" level=info msg="Started container" PID=1809 containerID=c9080db2b6daf76ef63b2b59e74d0239edbb838d08547298dd4502c7c3b4d9f4 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vhp59/dashboard-metrics-scraper id=c866f162-c4f5-41df-8d94-e865500f2435 name=/runtime.v1.RuntimeService/StartContainer sandboxID=82b8464121953a993bc43eb6fe67912f54b3283ad0ce74e3a1bd67f67c091d49
	Dec 02 20:56:36 default-k8s-diff-port-997805 crio[565]: time="2025-12-02T20:56:36.91806396Z" level=info msg="Removing container: 8b9571fc1afb59ffda70959998b9386a8cc1a412c773117671bd059b0c151419" id=b8378f1d-1888-4295-b94a-10e6938c2590 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 02 20:56:36 default-k8s-diff-port-997805 crio[565]: time="2025-12-02T20:56:36.928175406Z" level=info msg="Removed container 8b9571fc1afb59ffda70959998b9386a8cc1a412c773117671bd059b0c151419: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vhp59/dashboard-metrics-scraper" id=b8378f1d-1888-4295-b94a-10e6938c2590 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	c9080db2b6daf       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           13 seconds ago      Exited              dashboard-metrics-scraper   3                   82b8464121953       dashboard-metrics-scraper-6ffb444bf9-vhp59             kubernetes-dashboard
	35e720802a1bf       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           24 seconds ago      Running             storage-provisioner         1                   eba18f9d9797b       storage-provisioner                                    kube-system
	f7c1779df921d       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   48 seconds ago      Running             kubernetes-dashboard        0                   26e44926de7f4       kubernetes-dashboard-855c9754f9-jz8xk                  kubernetes-dashboard
	08d5150fce081       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           54 seconds ago      Running             busybox                     1                   c44964ac4c8c3       busybox                                                default
	f06d54a2384df       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           54 seconds ago      Running             kindnet-cni                 0                   0b1607b008992       kindnet-rzqpn                                          kube-system
	1e15bb4007b6f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           54 seconds ago      Exited              storage-provisioner         0                   eba18f9d9797b       storage-provisioner                                    kube-system
	5ad0a1655ba23       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                           55 seconds ago      Running             kube-proxy                  0                   f11c81c57060d       kube-proxy-s2jpn                                       kube-system
	fc477a72b7656       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           55 seconds ago      Running             coredns                     0                   1745c8b86e040       coredns-66bc5c9577-jrln7                               kube-system
	25e14e8feafb6       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           58 seconds ago      Running             etcd                        0                   97509908f5a98       etcd-default-k8s-diff-port-997805                      kube-system
	0c7e2844e2dbd       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                           58 seconds ago      Running             kube-scheduler              0                   4d0207ec1741b       kube-scheduler-default-k8s-diff-port-997805            kube-system
	81b0ec87511a0       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                           58 seconds ago      Running             kube-apiserver              0                   f5fdfcd5991e8       kube-apiserver-default-k8s-diff-port-997805            kube-system
	e13e6c4d6c5da       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                           58 seconds ago      Running             kube-controller-manager     0                   13288f31fdebc       kube-controller-manager-default-k8s-diff-port-997805   kube-system
	
	
	==> coredns [fc477a72b765693b81689208ff42b491035d31c49ea6b43c64099d495e7cec00] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:45135 - 2690 "HINFO IN 5587080186042255362.1680565545141175739. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.026839424s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-997805
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-997805
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c0dec3f81fa1d67ed4ee425e0873d0aa009f9b92
	                    minikube.k8s.io/name=default-k8s-diff-port-997805
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_02T20_54_57_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 02 Dec 2025 20:54:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-997805
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 02 Dec 2025 20:56:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 02 Dec 2025 20:56:25 +0000   Tue, 02 Dec 2025 20:54:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 02 Dec 2025 20:56:25 +0000   Tue, 02 Dec 2025 20:54:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 02 Dec 2025 20:56:25 +0000   Tue, 02 Dec 2025 20:54:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 02 Dec 2025 20:56:25 +0000   Tue, 02 Dec 2025 20:55:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-997805
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 c31a325af81b969158c21fa769271857
	  System UUID:                4d0fe763-c364-4b9d-a9b2-5ea428409eed
	  Boot ID:                    9dd5456f-e394-4b4b-9458-48d26faf507e
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 coredns-66bc5c9577-jrln7                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     109s
	  kube-system                 etcd-default-k8s-diff-port-997805                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         116s
	  kube-system                 kindnet-rzqpn                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      109s
	  kube-system                 kube-apiserver-default-k8s-diff-port-997805             250m (3%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-997805    200m (2%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-proxy-s2jpn                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-scheduler-default-k8s-diff-port-997805             100m (1%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-vhp59              0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-jz8xk                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 108s               kube-proxy       
	  Normal  Starting                 54s                kube-proxy       
	  Normal  NodeHasSufficientMemory  114s               kubelet          Node default-k8s-diff-port-997805 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    114s               kubelet          Node default-k8s-diff-port-997805 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     114s               kubelet          Node default-k8s-diff-port-997805 status is now: NodeHasSufficientPID
	  Normal  Starting                 114s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           110s               node-controller  Node default-k8s-diff-port-997805 event: Registered Node default-k8s-diff-port-997805 in Controller
	  Normal  NodeReady                98s                kubelet          Node default-k8s-diff-port-997805 status is now: NodeReady
	  Normal  Starting                 59s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  59s (x8 over 59s)  kubelet          Node default-k8s-diff-port-997805 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    59s (x8 over 59s)  kubelet          Node default-k8s-diff-port-997805 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     59s (x8 over 59s)  kubelet          Node default-k8s-diff-port-997805 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           53s                node-controller  Node default-k8s-diff-port-997805 event: Registered Node default-k8s-diff-port-997805 in Controller
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: a6 a8 6c 2c e4 fb 66 95 0d 9f b9 e6 08 00
	[ +32.254238] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a6 a8 6c 2c e4 fb 66 95 0d 9f b9 e6 08 00
	[Dec 2 20:52] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 62 03 bd 14 45 8a 08 06
	[  +0.000590] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 96 27 ad 0d 40 04 08 06
	[Dec 2 20:53] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 56 d8 4f 6f 7c f7 08 06
	[  +0.000700] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 82 e4 ba c0 78 5f 08 06
	[ +10.119645] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000022] ll header: 00000000: ff ff ff ff ff ff ea 15 d8 88 c9 57 08 06
	[  +2.447166] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 62 df 09 53 d6 6e 08 06
	[  +0.000374] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 62 8d 06 71 0a 5e 08 06
	[Dec 2 20:54] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 12 47 13 50 f6 bc 08 06
	[  +0.001523] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ea 15 d8 88 c9 57 08 06
	[ +22.123549] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 36 0d 45 06 42 2a 08 06
	[  +0.000389] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 56 d8 4f 6f 7c f7 08 06
	
	
	==> etcd [25e14e8feafb6c0d6c5261cd5e507b812e39fcb9c7e196408fe69d780ebbcd1d] <==
	{"level":"info","ts":"2025-12-02T20:55:55.078604Z","caller":"traceutil/trace.go:172","msg":"trace[1838275369] transaction","detail":"{read_only:false; response_revision:448; number_of_response:1; }","duration":"197.582016ms","start":"2025-12-02T20:55:54.880993Z","end":"2025-12-02T20:55:55.078575Z","steps":["trace[1838275369] 'process raft request'  (duration: 197.356077ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-02T20:55:55.307371Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"185.008178ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-apiserver-default-k8s-diff-port-997805\" limit:1 ","response":"range_response_count:1 size:7752"}
	{"level":"info","ts":"2025-12-02T20:55:55.307454Z","caller":"traceutil/trace.go:172","msg":"trace[2066049479] range","detail":"{range_begin:/registry/pods/kube-system/kube-apiserver-default-k8s-diff-port-997805; range_end:; response_count:1; response_revision:449; }","duration":"185.091265ms","start":"2025-12-02T20:55:55.122337Z","end":"2025-12-02T20:55:55.307428Z","steps":["trace[2066049479] 'agreement among raft nodes before linearized reading'  (duration: 73.80089ms)","trace[2066049479] 'range keys from in-memory index tree'  (duration: 111.086328ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-02T20:55:55.307396Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"111.178158ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722597461077860260 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/clusterrolebindings/kubernetes-dashboard\" mod_revision:0 > success:<request_put:<key:\"/registry/clusterrolebindings/kubernetes-dashboard\" value_size:1249 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-12-02T20:55:55.307644Z","caller":"traceutil/trace.go:172","msg":"trace[2033977389] transaction","detail":"{read_only:false; response_revision:450; number_of_response:1; }","duration":"223.353051ms","start":"2025-12-02T20:55:55.084277Z","end":"2025-12-02T20:55:55.307630Z","steps":["trace[2033977389] 'process raft request'  (duration: 111.897184ms)","trace[2033977389] 'compare'  (duration: 111.043792ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-02T20:55:55.307674Z","caller":"traceutil/trace.go:172","msg":"trace[1365941716] linearizableReadLoop","detail":"{readStateIndex:479; appliedIndex:477; }","duration":"111.546747ms","start":"2025-12-02T20:55:55.196114Z","end":"2025-12-02T20:55:55.307660Z","steps":["trace[1365941716] 'read index received'  (duration: 65.363µs)","trace[1365941716] 'applied index is now lower than readState.Index'  (duration: 111.480782ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-02T20:55:55.307688Z","caller":"traceutil/trace.go:172","msg":"trace[1298224716] transaction","detail":"{read_only:false; response_revision:451; number_of_response:1; }","duration":"218.987752ms","start":"2025-12-02T20:55:55.088688Z","end":"2025-12-02T20:55:55.307676Z","steps":["trace[1298224716] 'process raft request'  (duration: 218.87919ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-02T20:55:55.307778Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"119.496509ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-02T20:55:55.307930Z","caller":"traceutil/trace.go:172","msg":"trace[1191525244] range","detail":"{range_begin:/registry/clusterroles; range_end:; response_count:0; response_revision:451; }","duration":"119.651279ms","start":"2025-12-02T20:55:55.188267Z","end":"2025-12-02T20:55:55.307918Z","steps":["trace[1191525244] 'agreement among raft nodes before linearized reading'  (duration: 119.474214ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-02T20:55:55.307811Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"119.39798ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/priorityclasses/system-node-critical\" limit:1 ","response":"range_response_count:1 size:442"}
	{"level":"info","ts":"2025-12-02T20:55:55.308006Z","caller":"traceutil/trace.go:172","msg":"trace[1997034449] range","detail":"{range_begin:/registry/priorityclasses/system-node-critical; range_end:; response_count:1; response_revision:451; }","duration":"119.59132ms","start":"2025-12-02T20:55:55.188402Z","end":"2025-12-02T20:55:55.307993Z","steps":["trace[1997034449] 'agreement among raft nodes before linearized reading'  (duration: 119.316734ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-02T20:55:55.450852Z","caller":"traceutil/trace.go:172","msg":"trace[1252777728] linearizableReadLoop","detail":"{readStateIndex:480; appliedIndex:480; }","duration":"121.966061ms","start":"2025-12-02T20:55:55.328862Z","end":"2025-12-02T20:55:55.450828Z","steps":["trace[1252777728] 'read index received'  (duration: 121.935676ms)","trace[1252777728] 'applied index is now lower than readState.Index'  (duration: 6.33µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-02T20:55:55.511662Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"182.77594ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kubernetes-dashboard/dashboard-metrics-scraper\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-02T20:55:55.511732Z","caller":"traceutil/trace.go:172","msg":"trace[1826643080] range","detail":"{range_begin:/registry/deployments/kubernetes-dashboard/dashboard-metrics-scraper; range_end:; response_count:0; response_revision:452; }","duration":"182.859148ms","start":"2025-12-02T20:55:55.328857Z","end":"2025-12-02T20:55:55.511716Z","steps":["trace[1826643080] 'agreement among raft nodes before linearized reading'  (duration: 122.076349ms)","trace[1826643080] 'range keys from in-memory index tree'  (duration: 60.663926ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-02T20:55:55.511875Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"182.973607ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:aggregate-to-view\" limit:1 ","response":"range_response_count:1 size:2030"}
	{"level":"info","ts":"2025-12-02T20:55:55.511921Z","caller":"traceutil/trace.go:172","msg":"trace[1019459205] range","detail":"{range_begin:/registry/clusterroles/system:aggregate-to-view; range_end:; response_count:1; response_revision:453; }","duration":"183.028131ms","start":"2025-12-02T20:55:55.328881Z","end":"2025-12-02T20:55:55.511909Z","steps":["trace[1019459205] 'agreement among raft nodes before linearized reading'  (duration: 182.887632ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-02T20:55:55.512085Z","caller":"traceutil/trace.go:172","msg":"trace[9690257] transaction","detail":"{read_only:false; response_revision:453; number_of_response:1; }","duration":"193.069895ms","start":"2025-12-02T20:55:55.318990Z","end":"2025-12-02T20:55:55.512060Z","steps":["trace[9690257] 'process raft request'  (duration: 131.915422ms)","trace[9690257] 'compare'  (duration: 60.739957ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-02T20:55:55.512203Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"109.792491ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/storage-provisioner\" limit:1 ","response":"range_response_count:1 size:4336"}
	{"level":"info","ts":"2025-12-02T20:55:55.512238Z","caller":"traceutil/trace.go:172","msg":"trace[851423930] range","detail":"{range_begin:/registry/pods/kube-system/storage-provisioner; range_end:; response_count:1; response_revision:453; }","duration":"109.830953ms","start":"2025-12-02T20:55:55.402400Z","end":"2025-12-02T20:55:55.512231Z","steps":["trace[851423930] 'agreement among raft nodes before linearized reading'  (duration: 109.699387ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-02T20:55:55.849927Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"117.060314ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/busybox\" limit:1 ","response":"range_response_count:1 size:2812"}
	{"level":"warn","ts":"2025-12-02T20:55:55.849988Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"119.36545ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:aggregate-to-view\" limit:1 ","response":"range_response_count:1 size:2030"}
	{"level":"info","ts":"2025-12-02T20:55:55.850042Z","caller":"traceutil/trace.go:172","msg":"trace[1668444583] range","detail":"{range_begin:/registry/clusterroles/system:aggregate-to-view; range_end:; response_count:1; response_revision:459; }","duration":"119.415386ms","start":"2025-12-02T20:55:55.730608Z","end":"2025-12-02T20:55:55.850023Z","steps":["trace[1668444583] 'range keys from in-memory index tree'  (duration: 119.238252ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-02T20:55:55.850042Z","caller":"traceutil/trace.go:172","msg":"trace[171357317] range","detail":"{range_begin:/registry/pods/default/busybox; range_end:; response_count:1; response_revision:459; }","duration":"117.174475ms","start":"2025-12-02T20:55:55.732836Z","end":"2025-12-02T20:55:55.850010Z","steps":["trace[171357317] 'range keys from in-memory index tree'  (duration: 116.812214ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-02T20:55:55.849949Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"119.796405ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kubernetes-dashboard\" limit:1 ","response":"range_response_count:1 size:897"}
	{"level":"info","ts":"2025-12-02T20:55:55.850286Z","caller":"traceutil/trace.go:172","msg":"trace[830423202] range","detail":"{range_begin:/registry/namespaces/kubernetes-dashboard; range_end:; response_count:1; response_revision:459; }","duration":"120.140931ms","start":"2025-12-02T20:55:55.730118Z","end":"2025-12-02T20:55:55.850259Z","steps":["trace[830423202] 'range keys from in-memory index tree'  (duration: 119.657189ms)"],"step_count":1}
	
	
	==> kernel <==
	 20:56:50 up  2:39,  0 user,  load average: 3.60, 3.97, 2.71
	Linux default-k8s-diff-port-997805 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [f06d54a2384df567756e9be0cfb30d79b223d7ca905c4709c051828f8e793c87] <==
	I1202 20:55:56.148352       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1202 20:55:56.148615       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1202 20:55:56.148837       1 main.go:148] setting mtu 1500 for CNI 
	I1202 20:55:56.148859       1 main.go:178] kindnetd IP family: "ipv4"
	I1202 20:55:56.148881       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-02T20:55:56Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1202 20:55:56.445432       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1202 20:55:56.445471       1 controller.go:381] "Waiting for informer caches to sync"
	I1202 20:55:56.445642       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1202 20:55:56.445704       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1202 20:55:56.846508       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1202 20:55:56.846539       1 metrics.go:72] Registering metrics
	I1202 20:55:56.846593       1 controller.go:711] "Syncing nftables rules"
	I1202 20:56:06.364682       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1202 20:56:06.364786       1 main.go:301] handling current node
	I1202 20:56:16.370473       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1202 20:56:16.370520       1 main.go:301] handling current node
	I1202 20:56:26.364799       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1202 20:56:26.364839       1 main.go:301] handling current node
	I1202 20:56:36.368209       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1202 20:56:36.368261       1 main.go:301] handling current node
	I1202 20:56:46.372795       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1202 20:56:46.372831       1 main.go:301] handling current node
	
	
	==> kube-apiserver [81b0ec87511a05a7501d98eb27c52f69372a4b30c4ea523db262c140f9b68cd3] <==
	I1202 20:55:54.282858       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1202 20:55:54.282924       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1202 20:55:54.283623       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1202 20:55:54.283720       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1202 20:55:54.283672       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1202 20:55:54.290235       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1202 20:55:54.292022       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1202 20:55:54.300590       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1202 20:55:54.306353       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1202 20:55:54.306425       1 policy_source.go:240] refreshing policies
	I1202 20:55:54.317993       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1202 20:55:54.699437       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1202 20:55:54.854730       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1202 20:55:54.858526       1 controller.go:667] quota admission added evaluator for: namespaces
	I1202 20:55:55.311202       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1202 20:55:55.524913       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1202 20:55:55.641060       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1202 20:55:55.853433       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1202 20:55:55.937240       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.24.230"}
	I1202 20:55:55.949300       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.154.227"}
	I1202 20:55:57.733382       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1202 20:55:57.733430       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1202 20:55:57.935991       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1202 20:55:58.233356       1 controller.go:667] quota admission added evaluator for: endpoints
	I1202 20:55:58.233356       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [e13e6c4d6c5da602ac2e1402a7612205c5a0ceffdccf7618da3035e562a7d9d3] <==
	I1202 20:55:57.609246       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1202 20:55:57.609380       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-997805"
	I1202 20:55:57.609442       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1202 20:55:57.623803       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1202 20:55:57.627300       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1202 20:55:57.629239       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1202 20:55:57.629244       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1202 20:55:57.630500       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1202 20:55:57.630567       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1202 20:55:57.630577       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1202 20:55:57.630583       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1202 20:55:57.630592       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1202 20:55:57.630569       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1202 20:55:57.630685       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1202 20:55:57.631283       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1202 20:55:57.631722       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1202 20:55:57.631754       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1202 20:55:57.634299       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1202 20:55:57.634483       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1202 20:55:57.636346       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1202 20:55:57.637436       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1202 20:55:57.637476       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1202 20:55:57.639755       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1202 20:55:57.640943       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1202 20:55:57.660495       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [5ad0a1655ba23d5613d29f48e14efa7b904937342c2b4f154af87389ad6ae5a9] <==
	I1202 20:55:55.693952       1 server_linux.go:53] "Using iptables proxy"
	I1202 20:55:55.760974       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1202 20:55:55.861813       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1202 20:55:55.861858       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1202 20:55:55.861980       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1202 20:55:55.892575       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1202 20:55:55.892644       1 server_linux.go:132] "Using iptables Proxier"
	I1202 20:55:55.901965       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1202 20:55:55.902499       1 server.go:527] "Version info" version="v1.34.2"
	I1202 20:55:55.902791       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 20:55:55.907109       1 config.go:200] "Starting service config controller"
	I1202 20:55:55.907258       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1202 20:55:55.907470       1 config.go:106] "Starting endpoint slice config controller"
	I1202 20:55:55.907527       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1202 20:55:55.907679       1 config.go:403] "Starting serviceCIDR config controller"
	I1202 20:55:55.907715       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1202 20:55:55.909145       1 config.go:309] "Starting node config controller"
	I1202 20:55:55.909200       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1202 20:55:55.909210       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1202 20:55:56.008163       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1202 20:55:56.008192       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1202 20:55:56.008198       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [0c7e2844e2dbdbf5b9ffe8bf4e8d07304b64b059e3d4c965c2010c5d8a39c499] <==
	I1202 20:55:52.891386       1 serving.go:386] Generated self-signed cert in-memory
	W1202 20:55:54.215137       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1202 20:55:54.215174       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [role.rbac.authorization.k8s.io "extension-apiserver-authentication-reader" not found, role.rbac.authorization.k8s.io "system::leader-locking-kube-scheduler" not found]
	W1202 20:55:54.215189       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1202 20:55:54.215198       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1202 20:55:54.236291       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1202 20:55:54.236318       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 20:55:54.238876       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1202 20:55:54.238913       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1202 20:55:54.239292       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1202 20:55:54.239755       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1202 20:55:54.340641       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 02 20:55:58 default-k8s-diff-port-997805 kubelet[726]: I1202 20:55:58.144611     726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2mfc\" (UniqueName: \"kubernetes.io/projected/cbfcab3a-34f4-49e3-b330-2077b65e6a48-kube-api-access-c2mfc\") pod \"kubernetes-dashboard-855c9754f9-jz8xk\" (UID: \"cbfcab3a-34f4-49e3-b330-2077b65e6a48\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-jz8xk"
	Dec 02 20:55:58 default-k8s-diff-port-997805 kubelet[726]: I1202 20:55:58.144712     726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhxp8\" (UniqueName: \"kubernetes.io/projected/cc5c7477-3af9-4955-a7b0-94a907898050-kube-api-access-bhxp8\") pod \"dashboard-metrics-scraper-6ffb444bf9-vhp59\" (UID: \"cc5c7477-3af9-4955-a7b0-94a907898050\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vhp59"
	Dec 02 20:56:01 default-k8s-diff-port-997805 kubelet[726]: I1202 20:56:01.654511     726 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Dec 02 20:56:04 default-k8s-diff-port-997805 kubelet[726]: I1202 20:56:04.490246     726 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-jz8xk" podStartSLOduration=2.356286059 podStartE2EDuration="6.490223678s" podCreationTimestamp="2025-12-02 20:55:58 +0000 UTC" firstStartedPulling="2025-12-02 20:55:58.399480607 +0000 UTC m=+6.778320079" lastFinishedPulling="2025-12-02 20:56:02.533418221 +0000 UTC m=+10.912257698" observedRunningTime="2025-12-02 20:56:02.865871705 +0000 UTC m=+11.244711195" watchObservedRunningTime="2025-12-02 20:56:04.490223678 +0000 UTC m=+12.869063168"
	Dec 02 20:56:05 default-k8s-diff-port-997805 kubelet[726]: I1202 20:56:05.822624     726 scope.go:117] "RemoveContainer" containerID="51f3f00f170a758498b59c1187991a56865c24b44f57f6cfc0c511400ad68660"
	Dec 02 20:56:06 default-k8s-diff-port-997805 kubelet[726]: I1202 20:56:06.826958     726 scope.go:117] "RemoveContainer" containerID="51f3f00f170a758498b59c1187991a56865c24b44f57f6cfc0c511400ad68660"
	Dec 02 20:56:06 default-k8s-diff-port-997805 kubelet[726]: I1202 20:56:06.827294     726 scope.go:117] "RemoveContainer" containerID="200b961bc8b01d2d50a50e095ea2056aa5e2e23febb2edfacc81d4ddfb956fc0"
	Dec 02 20:56:06 default-k8s-diff-port-997805 kubelet[726]: E1202 20:56:06.827471     726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vhp59_kubernetes-dashboard(cc5c7477-3af9-4955-a7b0-94a907898050)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vhp59" podUID="cc5c7477-3af9-4955-a7b0-94a907898050"
	Dec 02 20:56:07 default-k8s-diff-port-997805 kubelet[726]: I1202 20:56:07.832199     726 scope.go:117] "RemoveContainer" containerID="200b961bc8b01d2d50a50e095ea2056aa5e2e23febb2edfacc81d4ddfb956fc0"
	Dec 02 20:56:07 default-k8s-diff-port-997805 kubelet[726]: E1202 20:56:07.832454     726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vhp59_kubernetes-dashboard(cc5c7477-3af9-4955-a7b0-94a907898050)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vhp59" podUID="cc5c7477-3af9-4955-a7b0-94a907898050"
	Dec 02 20:56:16 default-k8s-diff-port-997805 kubelet[726]: I1202 20:56:16.673192     726 scope.go:117] "RemoveContainer" containerID="200b961bc8b01d2d50a50e095ea2056aa5e2e23febb2edfacc81d4ddfb956fc0"
	Dec 02 20:56:16 default-k8s-diff-port-997805 kubelet[726]: I1202 20:56:16.858777     726 scope.go:117] "RemoveContainer" containerID="200b961bc8b01d2d50a50e095ea2056aa5e2e23febb2edfacc81d4ddfb956fc0"
	Dec 02 20:56:16 default-k8s-diff-port-997805 kubelet[726]: I1202 20:56:16.859092     726 scope.go:117] "RemoveContainer" containerID="8b9571fc1afb59ffda70959998b9386a8cc1a412c773117671bd059b0c151419"
	Dec 02 20:56:16 default-k8s-diff-port-997805 kubelet[726]: E1202 20:56:16.859336     726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vhp59_kubernetes-dashboard(cc5c7477-3af9-4955-a7b0-94a907898050)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vhp59" podUID="cc5c7477-3af9-4955-a7b0-94a907898050"
	Dec 02 20:56:25 default-k8s-diff-port-997805 kubelet[726]: I1202 20:56:25.884733     726 scope.go:117] "RemoveContainer" containerID="1e15bb4007b6f6ac5c5aba376e81233c28da69653a99ea88226c07cfeee8a9a7"
	Dec 02 20:56:26 default-k8s-diff-port-997805 kubelet[726]: I1202 20:56:26.672950     726 scope.go:117] "RemoveContainer" containerID="8b9571fc1afb59ffda70959998b9386a8cc1a412c773117671bd059b0c151419"
	Dec 02 20:56:26 default-k8s-diff-port-997805 kubelet[726]: E1202 20:56:26.673277     726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vhp59_kubernetes-dashboard(cc5c7477-3af9-4955-a7b0-94a907898050)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vhp59" podUID="cc5c7477-3af9-4955-a7b0-94a907898050"
	Dec 02 20:56:36 default-k8s-diff-port-997805 kubelet[726]: I1202 20:56:36.752920     726 scope.go:117] "RemoveContainer" containerID="8b9571fc1afb59ffda70959998b9386a8cc1a412c773117671bd059b0c151419"
	Dec 02 20:56:36 default-k8s-diff-port-997805 kubelet[726]: I1202 20:56:36.916527     726 scope.go:117] "RemoveContainer" containerID="8b9571fc1afb59ffda70959998b9386a8cc1a412c773117671bd059b0c151419"
	Dec 02 20:56:36 default-k8s-diff-port-997805 kubelet[726]: I1202 20:56:36.916749     726 scope.go:117] "RemoveContainer" containerID="c9080db2b6daf76ef63b2b59e74d0239edbb838d08547298dd4502c7c3b4d9f4"
	Dec 02 20:56:36 default-k8s-diff-port-997805 kubelet[726]: E1202 20:56:36.917061     726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vhp59_kubernetes-dashboard(cc5c7477-3af9-4955-a7b0-94a907898050)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vhp59" podUID="cc5c7477-3af9-4955-a7b0-94a907898050"
	Dec 02 20:56:45 default-k8s-diff-port-997805 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 02 20:56:45 default-k8s-diff-port-997805 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 02 20:56:45 default-k8s-diff-port-997805 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 20:56:45 default-k8s-diff-port-997805 systemd[1]: kubelet.service: Consumed 1.945s CPU time.
	
	
	==> kubernetes-dashboard [f7c1779df921dc77252b05de7b4552d502a7c9e38f020d197cbdfd6540d6213a] <==
	2025/12/02 20:56:02 Using namespace: kubernetes-dashboard
	2025/12/02 20:56:02 Using in-cluster config to connect to apiserver
	2025/12/02 20:56:02 Using secret token for csrf signing
	2025/12/02 20:56:02 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/02 20:56:02 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/02 20:56:02 Successful initial request to the apiserver, version: v1.34.2
	2025/12/02 20:56:02 Generating JWE encryption key
	2025/12/02 20:56:02 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/02 20:56:02 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/02 20:56:02 Initializing JWE encryption key from synchronized object
	2025/12/02 20:56:02 Creating in-cluster Sidecar client
	2025/12/02 20:56:02 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/02 20:56:02 Serving insecurely on HTTP port: 9090
	2025/12/02 20:56:32 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/02 20:56:02 Starting overwatch
	
	
	==> storage-provisioner [1e15bb4007b6f6ac5c5aba376e81233c28da69653a99ea88226c07cfeee8a9a7] <==
	I1202 20:55:55.745333       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1202 20:56:25.748002       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [35e720802a1bf3bbed62adc89a0f19dce7a67de2db637573eb1894ab9ebb8f24] <==
	I1202 20:56:25.945665       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1202 20:56:25.953372       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1202 20:56:25.953416       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1202 20:56:25.955574       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:56:29.411827       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:56:33.672384       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:56:37.271276       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:56:40.325457       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:56:43.348084       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:56:43.354422       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1202 20:56:43.354637       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1202 20:56:43.354820       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-997805_6cedf7ef-97cd-4056-9921-7b63b41ee2ed!
	I1202 20:56:43.354776       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"630e2ab7-c763-4f65-86eb-788c49314bcc", APIVersion:"v1", ResourceVersion:"638", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-997805_6cedf7ef-97cd-4056-9921-7b63b41ee2ed became leader
	W1202 20:56:43.357786       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:56:43.361997       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1202 20:56:43.455877       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-997805_6cedf7ef-97cd-4056-9921-7b63b41ee2ed!
	W1202 20:56:45.365991       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:56:45.371461       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:56:47.375843       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:56:47.382502       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:56:49.386391       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:56:49.391308       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-997805 -n default-k8s-diff-port-997805
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-997805 -n default-k8s-diff-port-997805: exit status 2 (338.075867ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-997805 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (6.36s)
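The exit status 2 from the status check above is consistent with the kubelet having been disabled by the failed pause attempt (the kubelet journal earlier in this log ends with systemd stopping kubelet.service at 20:56:45), while the apiserver container itself still reports Running. A small sketch for confirming that split state on the node follows; it assumes the Docker driver and reuses the profile name from this test, and is not part of the harness output itself.

	# Sketch: compare the overall profile status with the kubelet unit on the node.
	out/minikube-linux-amd64 status -p default-k8s-diff-port-997805
	docker exec default-k8s-diff-port-997805 sudo systemctl status kubelet --no-pager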

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (6.18s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-386191 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p embed-certs-386191 --alsologtostderr -v=1: exit status 80 (2.286576681s)

                                                
                                                
-- stdout --
	* Pausing node embed-certs-386191 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 20:58:02.464251  776962 out.go:360] Setting OutFile to fd 1 ...
	I1202 20:58:02.464553  776962 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:58:02.464566  776962 out.go:374] Setting ErrFile to fd 2...
	I1202 20:58:02.464571  776962 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:58:02.464778  776962 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-407427/.minikube/bin
	I1202 20:58:02.465030  776962 out.go:368] Setting JSON to false
	I1202 20:58:02.465051  776962 mustload.go:66] Loading cluster: embed-certs-386191
	I1202 20:58:02.465442  776962 config.go:182] Loaded profile config "embed-certs-386191": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 20:58:02.465844  776962 cli_runner.go:164] Run: docker container inspect embed-certs-386191 --format={{.State.Status}}
	I1202 20:58:02.484868  776962 host.go:66] Checking if "embed-certs-386191" exists ...
	I1202 20:58:02.485226  776962 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 20:58:02.545111  776962 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:54 SystemTime:2025-12-02 20:58:02.534530932 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 20:58:02.545734  776962 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-386191 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true
) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1202 20:58:02.547929  776962 out.go:179] * Pausing node embed-certs-386191 ... 
	I1202 20:58:02.549422  776962 host.go:66] Checking if "embed-certs-386191" exists ...
	I1202 20:58:02.549738  776962 ssh_runner.go:195] Run: systemctl --version
	I1202 20:58:02.549781  776962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-386191
	I1202 20:58:02.569226  776962 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/embed-certs-386191/id_rsa Username:docker}
	I1202 20:58:02.669650  776962 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 20:58:02.684016  776962 pause.go:52] kubelet running: true
	I1202 20:58:02.684112  776962 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1202 20:58:02.845218  776962 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1202 20:58:02.845335  776962 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1202 20:58:02.916308  776962 cri.go:89] found id: "fbbb7718121bf867e159d5cd1a6bf1edd51a7b976076819722430cf4282dc5dd"
	I1202 20:58:02.916337  776962 cri.go:89] found id: "fe0d3771d27b3c21233deb323722886a95d260e7637dc80599a483422f200d04"
	I1202 20:58:02.916344  776962 cri.go:89] found id: "8c1d49372b9e24a9b37e0b6123939de40a38bc39fb2d3b737f65fd8154b00adb"
	I1202 20:58:02.916351  776962 cri.go:89] found id: "3e2d26d4dcdd30ce3fe9e663bdd5abd2a899569ad144e98d6aec4179569df0cf"
	I1202 20:58:02.916356  776962 cri.go:89] found id: "b5ab2cecc26850a5fcfdda9460fabe3dfee322129d5a7ffa87daa1e4390a54cb"
	I1202 20:58:02.916361  776962 cri.go:89] found id: "977a5d34d10349633d8b109d327cf440d676aa5501596ec9742db0005680b6ea"
	I1202 20:58:02.916365  776962 cri.go:89] found id: "7bbd6132314dd50edb345c367cfd40b9555ce01487136490278226bf20c9869c"
	I1202 20:58:02.916369  776962 cri.go:89] found id: "bb42ebc0538d2d4002108a87aba40e3d0ac601e9d3e24c09df1bd4436d20d164"
	I1202 20:58:02.916372  776962 cri.go:89] found id: "2d91f220a3e5c81f5d5d8cdae53244fd20a45b32d3e9cef96c94d22f621da68c"
	I1202 20:58:02.916388  776962 cri.go:89] found id: "8a430412e5cdd121d367b7b4d53b1fa49127fabd0127bc78bee44ec9f14c657b"
	I1202 20:58:02.916393  776962 cri.go:89] found id: "3ca826e1199be159f228fc829ee2aa57f744353729960f312b4007dab7811bd8"
	I1202 20:58:02.916398  776962 cri.go:89] found id: ""
	I1202 20:58:02.916446  776962 ssh_runner.go:195] Run: sudo runc list -f json
	I1202 20:58:02.929163  776962 retry.go:31] will retry after 170.501927ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T20:58:02Z" level=error msg="open /run/runc: no such file or directory"
	I1202 20:58:03.100677  776962 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 20:58:03.114113  776962 pause.go:52] kubelet running: false
	I1202 20:58:03.114183  776962 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1202 20:58:03.253807  776962 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1202 20:58:03.253889  776962 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1202 20:58:03.322255  776962 cri.go:89] found id: "fbbb7718121bf867e159d5cd1a6bf1edd51a7b976076819722430cf4282dc5dd"
	I1202 20:58:03.322283  776962 cri.go:89] found id: "fe0d3771d27b3c21233deb323722886a95d260e7637dc80599a483422f200d04"
	I1202 20:58:03.322290  776962 cri.go:89] found id: "8c1d49372b9e24a9b37e0b6123939de40a38bc39fb2d3b737f65fd8154b00adb"
	I1202 20:58:03.322295  776962 cri.go:89] found id: "3e2d26d4dcdd30ce3fe9e663bdd5abd2a899569ad144e98d6aec4179569df0cf"
	I1202 20:58:03.322299  776962 cri.go:89] found id: "b5ab2cecc26850a5fcfdda9460fabe3dfee322129d5a7ffa87daa1e4390a54cb"
	I1202 20:58:03.322304  776962 cri.go:89] found id: "977a5d34d10349633d8b109d327cf440d676aa5501596ec9742db0005680b6ea"
	I1202 20:58:03.322308  776962 cri.go:89] found id: "7bbd6132314dd50edb345c367cfd40b9555ce01487136490278226bf20c9869c"
	I1202 20:58:03.322327  776962 cri.go:89] found id: "bb42ebc0538d2d4002108a87aba40e3d0ac601e9d3e24c09df1bd4436d20d164"
	I1202 20:58:03.322332  776962 cri.go:89] found id: "2d91f220a3e5c81f5d5d8cdae53244fd20a45b32d3e9cef96c94d22f621da68c"
	I1202 20:58:03.322341  776962 cri.go:89] found id: "8a430412e5cdd121d367b7b4d53b1fa49127fabd0127bc78bee44ec9f14c657b"
	I1202 20:58:03.322345  776962 cri.go:89] found id: "3ca826e1199be159f228fc829ee2aa57f744353729960f312b4007dab7811bd8"
	I1202 20:58:03.322350  776962 cri.go:89] found id: ""
	I1202 20:58:03.322401  776962 ssh_runner.go:195] Run: sudo runc list -f json
	I1202 20:58:03.334197  776962 retry.go:31] will retry after 314.089079ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T20:58:03Z" level=error msg="open /run/runc: no such file or directory"
	I1202 20:58:03.648711  776962 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 20:58:03.662040  776962 pause.go:52] kubelet running: false
	I1202 20:58:03.662128  776962 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1202 20:58:03.801726  776962 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1202 20:58:03.801836  776962 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1202 20:58:03.872150  776962 cri.go:89] found id: "fbbb7718121bf867e159d5cd1a6bf1edd51a7b976076819722430cf4282dc5dd"
	I1202 20:58:03.872183  776962 cri.go:89] found id: "fe0d3771d27b3c21233deb323722886a95d260e7637dc80599a483422f200d04"
	I1202 20:58:03.872190  776962 cri.go:89] found id: "8c1d49372b9e24a9b37e0b6123939de40a38bc39fb2d3b737f65fd8154b00adb"
	I1202 20:58:03.872194  776962 cri.go:89] found id: "3e2d26d4dcdd30ce3fe9e663bdd5abd2a899569ad144e98d6aec4179569df0cf"
	I1202 20:58:03.872198  776962 cri.go:89] found id: "b5ab2cecc26850a5fcfdda9460fabe3dfee322129d5a7ffa87daa1e4390a54cb"
	I1202 20:58:03.872207  776962 cri.go:89] found id: "977a5d34d10349633d8b109d327cf440d676aa5501596ec9742db0005680b6ea"
	I1202 20:58:03.872211  776962 cri.go:89] found id: "7bbd6132314dd50edb345c367cfd40b9555ce01487136490278226bf20c9869c"
	I1202 20:58:03.872216  776962 cri.go:89] found id: "bb42ebc0538d2d4002108a87aba40e3d0ac601e9d3e24c09df1bd4436d20d164"
	I1202 20:58:03.872220  776962 cri.go:89] found id: "2d91f220a3e5c81f5d5d8cdae53244fd20a45b32d3e9cef96c94d22f621da68c"
	I1202 20:58:03.872234  776962 cri.go:89] found id: "8a430412e5cdd121d367b7b4d53b1fa49127fabd0127bc78bee44ec9f14c657b"
	I1202 20:58:03.872243  776962 cri.go:89] found id: "3ca826e1199be159f228fc829ee2aa57f744353729960f312b4007dab7811bd8"
	I1202 20:58:03.872249  776962 cri.go:89] found id: ""
	I1202 20:58:03.872295  776962 ssh_runner.go:195] Run: sudo runc list -f json
	I1202 20:58:03.884455  776962 retry.go:31] will retry after 542.806119ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T20:58:03Z" level=error msg="open /run/runc: no such file or directory"
	I1202 20:58:04.428352  776962 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 20:58:04.454338  776962 pause.go:52] kubelet running: false
	I1202 20:58:04.454433  776962 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1202 20:58:04.590037  776962 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1202 20:58:04.590139  776962 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1202 20:58:04.656363  776962 cri.go:89] found id: "fbbb7718121bf867e159d5cd1a6bf1edd51a7b976076819722430cf4282dc5dd"
	I1202 20:58:04.656388  776962 cri.go:89] found id: "fe0d3771d27b3c21233deb323722886a95d260e7637dc80599a483422f200d04"
	I1202 20:58:04.656392  776962 cri.go:89] found id: "8c1d49372b9e24a9b37e0b6123939de40a38bc39fb2d3b737f65fd8154b00adb"
	I1202 20:58:04.656395  776962 cri.go:89] found id: "3e2d26d4dcdd30ce3fe9e663bdd5abd2a899569ad144e98d6aec4179569df0cf"
	I1202 20:58:04.656398  776962 cri.go:89] found id: "b5ab2cecc26850a5fcfdda9460fabe3dfee322129d5a7ffa87daa1e4390a54cb"
	I1202 20:58:04.656402  776962 cri.go:89] found id: "977a5d34d10349633d8b109d327cf440d676aa5501596ec9742db0005680b6ea"
	I1202 20:58:04.656405  776962 cri.go:89] found id: "7bbd6132314dd50edb345c367cfd40b9555ce01487136490278226bf20c9869c"
	I1202 20:58:04.656408  776962 cri.go:89] found id: "bb42ebc0538d2d4002108a87aba40e3d0ac601e9d3e24c09df1bd4436d20d164"
	I1202 20:58:04.656411  776962 cri.go:89] found id: "2d91f220a3e5c81f5d5d8cdae53244fd20a45b32d3e9cef96c94d22f621da68c"
	I1202 20:58:04.656429  776962 cri.go:89] found id: "8a430412e5cdd121d367b7b4d53b1fa49127fabd0127bc78bee44ec9f14c657b"
	I1202 20:58:04.656432  776962 cri.go:89] found id: "3ca826e1199be159f228fc829ee2aa57f744353729960f312b4007dab7811bd8"
	I1202 20:58:04.656440  776962 cri.go:89] found id: ""
	I1202 20:58:04.656487  776962 ssh_runner.go:195] Run: sudo runc list -f json
	I1202 20:58:04.671713  776962 out.go:203] 
	W1202 20:58:04.673101  776962 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T20:58:04Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T20:58:04Z" level=error msg="open /run/runc: no such file or directory"
	
	W1202 20:58:04.673129  776962 out.go:285] * 
	* 
	W1202 20:58:04.678304  776962 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 20:58:04.679788  776962 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p embed-certs-386191 --alsologtostderr -v=1 failed: exit status 80
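The exit status 80 above comes from the pause path giving up after `sudo runc list -f json` repeatedly exits 1 with "open /run/runc: no such file or directory" on this CRI-O node, by which point the first attempt has already disabled the kubelet. A minimal reproduction sketch follows; it assumes the Docker driver, reuses the node container name from the docker inspect output below and the exact commands visible in the stderr log, and the `/run/crio` path in the last step is an assumption about where CRI-O keeps runtime state rather than something this report confirms.

	# Sketch: re-run the commands minikube's pause path issues on the node.
	docker ps --filter name=embed-certs-386191
	docker exec embed-certs-386191 sudo systemctl is-active kubelet
	docker exec embed-certs-386191 sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	docker exec embed-certs-386191 sudo runc list -f json
	# Check whether the runc state directory exists at all or lives elsewhere (path is an assumption).
	docker exec embed-certs-386191 ls -ld /run/runc /run/crio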
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-386191
helpers_test.go:243: (dbg) docker inspect embed-certs-386191:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "59d0941ced13c8c23df4b796c1700e71da3dff2dc9f33015ee98bb7f9e84e6c9",
	        "Created": "2025-12-02T20:55:55.991908115Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 774433,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-02T20:57:03.809622421Z",
	            "FinishedAt": "2025-12-02T20:57:02.921670957Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/59d0941ced13c8c23df4b796c1700e71da3dff2dc9f33015ee98bb7f9e84e6c9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/59d0941ced13c8c23df4b796c1700e71da3dff2dc9f33015ee98bb7f9e84e6c9/hostname",
	        "HostsPath": "/var/lib/docker/containers/59d0941ced13c8c23df4b796c1700e71da3dff2dc9f33015ee98bb7f9e84e6c9/hosts",
	        "LogPath": "/var/lib/docker/containers/59d0941ced13c8c23df4b796c1700e71da3dff2dc9f33015ee98bb7f9e84e6c9/59d0941ced13c8c23df4b796c1700e71da3dff2dc9f33015ee98bb7f9e84e6c9-json.log",
	        "Name": "/embed-certs-386191",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "embed-certs-386191:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-386191",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "59d0941ced13c8c23df4b796c1700e71da3dff2dc9f33015ee98bb7f9e84e6c9",
	                "LowerDir": "/var/lib/docker/overlay2/cd263fb850dea457d23961af62640291018121fa574740d96ea92fe99c9aa05c-init/diff:/var/lib/docker/overlay2/49bb44e987885c48600d5ae7ad3c81cad82a00f38070c0460882c5746d9fae59/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cd263fb850dea457d23961af62640291018121fa574740d96ea92fe99c9aa05c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cd263fb850dea457d23961af62640291018121fa574740d96ea92fe99c9aa05c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cd263fb850dea457d23961af62640291018121fa574740d96ea92fe99c9aa05c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-386191",
	                "Source": "/var/lib/docker/volumes/embed-certs-386191/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-386191",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-386191",
	                "name.minikube.sigs.k8s.io": "embed-certs-386191",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "d9e4366bed059f165927933f953c2fe4b31778e9683f03f5c4d7f707f537913c",
	            "SandboxKey": "/var/run/docker/netns/d9e4366bed05",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33518"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33519"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33522"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33520"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33521"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-386191": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "09e54ca661ff7c94761e454bfdeba97f3291ada6df7679173f7c9249a52d8235",
	                    "EndpointID": "bb8c77e970185dc84d4c692db55219019e72ca1ab30bf78c163dd98c15856dd2",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "62:16:ac:40:8a:20",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-386191",
	                        "59d0941ced13"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
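The NetworkSettings.Ports block in the inspect output above is what the harness later queries to reach the node over SSH. A minimal sketch of that same lookup (container name embed-certs-386191 and the 22/tcp mapping are taken from this dump; with the output above it prints 33518):
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' embed-certs-386191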
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-386191 -n embed-certs-386191
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-386191 -n embed-certs-386191: exit status 2 (336.471776ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-386191 logs -n 25
E1202 20:58:06.136876  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/kindnet-775392/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-386191 logs -n 25: (1.151802988s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                   │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ pause   │ -p newest-cni-245604 --alsologtostderr -v=1                                                                                                                              │ newest-cni-245604            │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-997805 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                  │ default-k8s-diff-port-997805 │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:55 UTC │
	│ start   │ -p default-k8s-diff-port-997805 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2 │ default-k8s-diff-port-997805 │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:56 UTC │
	│ delete  │ -p newest-cni-245604                                                                                                                                                     │ newest-cni-245604            │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:55 UTC │
	│ delete  │ -p newest-cni-245604                                                                                                                                                     │ newest-cni-245604            │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:55 UTC │
	│ delete  │ -p disable-driver-mounts-234978                                                                                                                                          │ disable-driver-mounts-234978 │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:55 UTC │
	│ start   │ -p embed-certs-386191 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                   │ embed-certs-386191           │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:56 UTC │
	│ image   │ old-k8s-version-992336 image list --format=json                                                                                                                          │ old-k8s-version-992336       │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:55 UTC │
	│ pause   │ -p old-k8s-version-992336 --alsologtostderr -v=1                                                                                                                         │ old-k8s-version-992336       │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │                     │
	│ delete  │ -p old-k8s-version-992336                                                                                                                                                │ old-k8s-version-992336       │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:56 UTC │
	│ delete  │ -p old-k8s-version-992336                                                                                                                                                │ old-k8s-version-992336       │ jenkins │ v1.37.0 │ 02 Dec 25 20:56 UTC │ 02 Dec 25 20:56 UTC │
	│ image   │ no-preload-336331 image list --format=json                                                                                                                               │ no-preload-336331            │ jenkins │ v1.37.0 │ 02 Dec 25 20:56 UTC │ 02 Dec 25 20:56 UTC │
	│ pause   │ -p no-preload-336331 --alsologtostderr -v=1                                                                                                                              │ no-preload-336331            │ jenkins │ v1.37.0 │ 02 Dec 25 20:56 UTC │                     │
	│ delete  │ -p no-preload-336331                                                                                                                                                     │ no-preload-336331            │ jenkins │ v1.37.0 │ 02 Dec 25 20:56 UTC │ 02 Dec 25 20:56 UTC │
	│ delete  │ -p no-preload-336331                                                                                                                                                     │ no-preload-336331            │ jenkins │ v1.37.0 │ 02 Dec 25 20:56 UTC │ 02 Dec 25 20:56 UTC │
	│ addons  │ enable metrics-server -p embed-certs-386191 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                 │ embed-certs-386191           │ jenkins │ v1.37.0 │ 02 Dec 25 20:56 UTC │                     │
	│ image   │ default-k8s-diff-port-997805 image list --format=json                                                                                                                    │ default-k8s-diff-port-997805 │ jenkins │ v1.37.0 │ 02 Dec 25 20:56 UTC │ 02 Dec 25 20:56 UTC │
	│ stop    │ -p embed-certs-386191 --alsologtostderr -v=3                                                                                                                             │ embed-certs-386191           │ jenkins │ v1.37.0 │ 02 Dec 25 20:56 UTC │ 02 Dec 25 20:57 UTC │
	│ pause   │ -p default-k8s-diff-port-997805 --alsologtostderr -v=1                                                                                                                   │ default-k8s-diff-port-997805 │ jenkins │ v1.37.0 │ 02 Dec 25 20:56 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-997805                                                                                                                                          │ default-k8s-diff-port-997805 │ jenkins │ v1.37.0 │ 02 Dec 25 20:56 UTC │ 02 Dec 25 20:56 UTC │
	│ delete  │ -p default-k8s-diff-port-997805                                                                                                                                          │ default-k8s-diff-port-997805 │ jenkins │ v1.37.0 │ 02 Dec 25 20:56 UTC │ 02 Dec 25 20:56 UTC │
	│ addons  │ enable dashboard -p embed-certs-386191 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                            │ embed-certs-386191           │ jenkins │ v1.37.0 │ 02 Dec 25 20:57 UTC │ 02 Dec 25 20:57 UTC │
	│ start   │ -p embed-certs-386191 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                   │ embed-certs-386191           │ jenkins │ v1.37.0 │ 02 Dec 25 20:57 UTC │ 02 Dec 25 20:57 UTC │
	│ image   │ embed-certs-386191 image list --format=json                                                                                                                              │ embed-certs-386191           │ jenkins │ v1.37.0 │ 02 Dec 25 20:58 UTC │ 02 Dec 25 20:58 UTC │
	│ pause   │ -p embed-certs-386191 --alsologtostderr -v=1                                                                                                                             │ embed-certs-386191           │ jenkins │ v1.37.0 │ 02 Dec 25 20:58 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
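	The last audit row is the step under test: pause was issued against embed-certs-386191 and never recorded an end time. A sketch for replaying that step by hand against the same profile, using the binary path and flags that appear elsewhere in this report:
	out/minikube-linux-amd64 pause -p embed-certs-386191 --alsologtostderr -v=1
	out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-386191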
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 20:57:03
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 20:57:03.568914  774232 out.go:360] Setting OutFile to fd 1 ...
	I1202 20:57:03.569228  774232 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:57:03.569238  774232 out.go:374] Setting ErrFile to fd 2...
	I1202 20:57:03.569243  774232 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:57:03.569469  774232 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-407427/.minikube/bin
	I1202 20:57:03.569936  774232 out.go:368] Setting JSON to false
	I1202 20:57:03.571099  774232 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":9568,"bootTime":1764699456,"procs":315,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 20:57:03.571176  774232 start.go:143] virtualization: kvm guest
	I1202 20:57:03.574199  774232 out.go:179] * [embed-certs-386191] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1202 20:57:03.575811  774232 out.go:179]   - MINIKUBE_LOCATION=21997
	I1202 20:57:03.575844  774232 notify.go:221] Checking for updates...
	I1202 20:57:03.578362  774232 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 20:57:03.580018  774232 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-407427/kubeconfig
	I1202 20:57:03.584349  774232 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-407427/.minikube
	I1202 20:57:03.585795  774232 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1202 20:57:03.587098  774232 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 20:57:03.588892  774232 config.go:182] Loaded profile config "embed-certs-386191": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 20:57:03.589445  774232 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 20:57:03.613212  774232 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1202 20:57:03.613340  774232 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 20:57:03.672046  774232 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:44 SystemTime:2025-12-02 20:57:03.662081286 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 20:57:03.672188  774232 docker.go:319] overlay module found
	I1202 20:57:03.674861  774232 out.go:179] * Using the docker driver based on existing profile
	I1202 20:57:03.676435  774232 start.go:309] selected driver: docker
	I1202 20:57:03.676454  774232 start.go:927] validating driver "docker" against &{Name:embed-certs-386191 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-386191 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 20:57:03.676549  774232 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 20:57:03.677183  774232 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 20:57:03.734034  774232 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:44 SystemTime:2025-12-02 20:57:03.724613595 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 20:57:03.734373  774232 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 20:57:03.734411  774232 cni.go:84] Creating CNI manager for ""
	I1202 20:57:03.734477  774232 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 20:57:03.734515  774232 start.go:353] cluster config:
	{Name:embed-certs-386191 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-386191 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fals
e DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 20:57:03.736340  774232 out.go:179] * Starting "embed-certs-386191" primary control-plane node in "embed-certs-386191" cluster
	I1202 20:57:03.737738  774232 cache.go:134] Beginning downloading kic base image for docker with crio
	I1202 20:57:03.739334  774232 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1202 20:57:03.740648  774232 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 20:57:03.740693  774232 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 20:57:03.740696  774232 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21997-407427/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1202 20:57:03.740735  774232 cache.go:65] Caching tarball of preloaded images
	I1202 20:57:03.740883  774232 preload.go:238] Found /home/jenkins/minikube-integration/21997-407427/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1202 20:57:03.740914  774232 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1202 20:57:03.741042  774232 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/config.json ...
	I1202 20:57:03.762164  774232 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1202 20:57:03.762185  774232 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1202 20:57:03.762202  774232 cache.go:243] Successfully downloaded all kic artifacts
	I1202 20:57:03.762237  774232 start.go:360] acquireMachinesLock for embed-certs-386191: {Name:mk07b451c8d7193712ed79603183bf03b141f2ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 20:57:03.762295  774232 start.go:364] duration metric: took 38.667µs to acquireMachinesLock for "embed-certs-386191"
	I1202 20:57:03.762311  774232 start.go:96] Skipping create...Using existing machine configuration
	I1202 20:57:03.762318  774232 fix.go:54] fixHost starting: 
	I1202 20:57:03.762528  774232 cli_runner.go:164] Run: docker container inspect embed-certs-386191 --format={{.State.Status}}
	I1202 20:57:03.781195  774232 fix.go:112] recreateIfNeeded on embed-certs-386191: state=Stopped err=<nil>
	W1202 20:57:03.781249  774232 fix.go:138] unexpected machine state, will restart: <nil>
	I1202 20:57:03.783099  774232 out.go:252] * Restarting existing docker container for "embed-certs-386191" ...
	I1202 20:57:03.783194  774232 cli_runner.go:164] Run: docker start embed-certs-386191
	I1202 20:57:04.040060  774232 cli_runner.go:164] Run: docker container inspect embed-certs-386191 --format={{.State.Status}}
	I1202 20:57:04.060513  774232 kic.go:430] container "embed-certs-386191" state is running.
	I1202 20:57:04.060964  774232 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-386191
	I1202 20:57:04.080024  774232 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/config.json ...
	I1202 20:57:04.080418  774232 machine.go:94] provisionDockerMachine start ...
	I1202 20:57:04.080523  774232 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-386191
	I1202 20:57:04.100700  774232 main.go:143] libmachine: Using SSH client type: native
	I1202 20:57:04.101007  774232 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33518 <nil> <nil>}
	I1202 20:57:04.101024  774232 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 20:57:04.101723  774232 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:34366->127.0.0.1:33518: read: connection reset by peer
	I1202 20:57:07.244692  774232 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-386191
	
	I1202 20:57:07.244732  774232 ubuntu.go:182] provisioning hostname "embed-certs-386191"
	I1202 20:57:07.244811  774232 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-386191
	I1202 20:57:07.264773  774232 main.go:143] libmachine: Using SSH client type: native
	I1202 20:57:07.265037  774232 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33518 <nil> <nil>}
	I1202 20:57:07.265057  774232 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-386191 && echo "embed-certs-386191" | sudo tee /etc/hostname
	I1202 20:57:07.417302  774232 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-386191
	
	I1202 20:57:07.417376  774232 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-386191
	I1202 20:57:07.436562  774232 main.go:143] libmachine: Using SSH client type: native
	I1202 20:57:07.436797  774232 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33518 <nil> <nil>}
	I1202 20:57:07.436815  774232 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-386191' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-386191/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-386191' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 20:57:07.579568  774232 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1202 20:57:07.579608  774232 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-407427/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-407427/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-407427/.minikube}
	I1202 20:57:07.579633  774232 ubuntu.go:190] setting up certificates
	I1202 20:57:07.579646  774232 provision.go:84] configureAuth start
	I1202 20:57:07.579715  774232 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-386191
	I1202 20:57:07.599317  774232 provision.go:143] copyHostCerts
	I1202 20:57:07.599396  774232 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-407427/.minikube/cert.pem, removing ...
	I1202 20:57:07.599412  774232 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-407427/.minikube/cert.pem
	I1202 20:57:07.599511  774232 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-407427/.minikube/cert.pem (1123 bytes)
	I1202 20:57:07.599683  774232 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-407427/.minikube/key.pem, removing ...
	I1202 20:57:07.599697  774232 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-407427/.minikube/key.pem
	I1202 20:57:07.599755  774232 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-407427/.minikube/key.pem (1675 bytes)
	I1202 20:57:07.599859  774232 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-407427/.minikube/ca.pem, removing ...
	I1202 20:57:07.599868  774232 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-407427/.minikube/ca.pem
	I1202 20:57:07.599960  774232 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-407427/.minikube/ca.pem (1082 bytes)
	I1202 20:57:07.600081  774232 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-407427/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca-key.pem org=jenkins.embed-certs-386191 san=[127.0.0.1 192.168.103.2 embed-certs-386191 localhost minikube]
	I1202 20:57:07.648058  774232 provision.go:177] copyRemoteCerts
	I1202 20:57:07.648157  774232 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 20:57:07.648228  774232 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-386191
	I1202 20:57:07.667174  774232 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/embed-certs-386191/id_rsa Username:docker}
	I1202 20:57:07.768998  774232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 20:57:07.788122  774232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1202 20:57:07.807387  774232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1202 20:57:07.826355  774232 provision.go:87] duration metric: took 246.694362ms to configureAuth
	I1202 20:57:07.826383  774232 ubuntu.go:206] setting minikube options for container-runtime
	I1202 20:57:07.826543  774232 config.go:182] Loaded profile config "embed-certs-386191": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 20:57:07.826653  774232 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-386191
	I1202 20:57:07.845598  774232 main.go:143] libmachine: Using SSH client type: native
	I1202 20:57:07.845893  774232 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33518 <nil> <nil>}
	I1202 20:57:07.845910  774232 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 20:57:08.184581  774232 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 20:57:08.184613  774232 machine.go:97] duration metric: took 4.104173799s to provisionDockerMachine
	I1202 20:57:08.184630  774232 start.go:293] postStartSetup for "embed-certs-386191" (driver="docker")
	I1202 20:57:08.184645  774232 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 20:57:08.184730  774232 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 20:57:08.184795  774232 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-386191
	I1202 20:57:08.204944  774232 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/embed-certs-386191/id_rsa Username:docker}
	I1202 20:57:08.305914  774232 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 20:57:08.309766  774232 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 20:57:08.309809  774232 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 20:57:08.309823  774232 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-407427/.minikube/addons for local assets ...
	I1202 20:57:08.309877  774232 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-407427/.minikube/files for local assets ...
	I1202 20:57:08.309985  774232 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-407427/.minikube/files/etc/ssl/certs/4110322.pem -> 4110322.pem in /etc/ssl/certs
	I1202 20:57:08.310090  774232 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 20:57:08.318141  774232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/files/etc/ssl/certs/4110322.pem --> /etc/ssl/certs/4110322.pem (1708 bytes)
	I1202 20:57:08.336773  774232 start.go:296] duration metric: took 152.121205ms for postStartSetup
	I1202 20:57:08.336865  774232 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 20:57:08.336915  774232 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-386191
	I1202 20:57:08.356216  774232 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/embed-certs-386191/id_rsa Username:docker}
	I1202 20:57:08.454737  774232 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 20:57:08.459772  774232 fix.go:56] duration metric: took 4.697444869s for fixHost
	I1202 20:57:08.459799  774232 start.go:83] releasing machines lock for "embed-certs-386191", held for 4.697494598s
	I1202 20:57:08.459884  774232 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-386191
	I1202 20:57:08.478583  774232 ssh_runner.go:195] Run: cat /version.json
	I1202 20:57:08.478654  774232 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 20:57:08.478674  774232 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-386191
	I1202 20:57:08.478721  774232 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-386191
	I1202 20:57:08.498926  774232 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/embed-certs-386191/id_rsa Username:docker}
	I1202 20:57:08.499314  774232 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/embed-certs-386191/id_rsa Username:docker}
	I1202 20:57:08.652660  774232 ssh_runner.go:195] Run: systemctl --version
	I1202 20:57:08.659775  774232 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 20:57:08.696845  774232 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 20:57:08.701831  774232 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 20:57:08.701946  774232 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 20:57:08.710279  774232 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1202 20:57:08.710306  774232 start.go:496] detecting cgroup driver to use...
	I1202 20:57:08.710340  774232 detect.go:190] detected "systemd" cgroup driver on host os
	I1202 20:57:08.710421  774232 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 20:57:08.725534  774232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 20:57:08.739105  774232 docker.go:218] disabling cri-docker service (if available) ...
	I1202 20:57:08.739195  774232 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 20:57:08.754305  774232 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 20:57:08.768174  774232 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 20:57:08.849581  774232 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 20:57:08.929908  774232 docker.go:234] disabling docker service ...
	I1202 20:57:08.929985  774232 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 20:57:08.944745  774232 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 20:57:08.958289  774232 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 20:57:09.040440  774232 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 20:57:09.121679  774232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 20:57:09.135343  774232 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 20:57:09.150391  774232 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1202 20:57:09.150454  774232 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:57:09.160229  774232 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1202 20:57:09.160309  774232 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:57:09.169758  774232 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:57:09.179178  774232 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:57:09.188470  774232 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 20:57:09.197290  774232 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:57:09.207458  774232 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:57:09.216903  774232 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:57:09.226360  774232 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 20:57:09.234532  774232 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 20:57:09.242859  774232 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 20:57:09.326255  774232 ssh_runner.go:195] Run: sudo systemctl restart crio
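	The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf before crio is restarted. A sketch of the keys they are expected to leave behind (the rest of the drop-in ships with the kicbase image) and a spot-check to run inside the node:
	# pause_image     = "registry.k8s.io/pause:3.10.1"
	# cgroup_manager  = "systemd"
	# conmon_cgroup   = "pod"
	# default_sysctls = [ "net.ipv4.ip_unprivileged_port_start=0", ]
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf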
	I1202 20:57:09.470049  774232 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 20:57:09.470136  774232 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 20:57:09.474392  774232 start.go:564] Will wait 60s for crictl version
	I1202 20:57:09.474451  774232 ssh_runner.go:195] Run: which crictl
	I1202 20:57:09.478445  774232 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 20:57:09.504698  774232 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1202 20:57:09.504770  774232 ssh_runner.go:195] Run: crio --version
	I1202 20:57:09.534114  774232 ssh_runner.go:195] Run: crio --version
	I1202 20:57:09.566764  774232 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.2 ...
	I1202 20:57:09.568218  774232 cli_runner.go:164] Run: docker network inspect embed-certs-386191 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 20:57:09.587321  774232 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1202 20:57:09.592089  774232 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 20:57:09.604537  774232 kubeadm.go:884] updating cluster {Name:embed-certs-386191 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-386191 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:dock
er BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 20:57:09.604663  774232 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 20:57:09.604705  774232 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 20:57:09.638386  774232 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 20:57:09.638408  774232 crio.go:433] Images already preloaded, skipping extraction
	I1202 20:57:09.638469  774232 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 20:57:09.664533  774232 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 20:57:09.664556  774232 cache_images.go:86] Images are preloaded, skipping loading
	I1202 20:57:09.664564  774232 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.2 crio true true} ...
	I1202 20:57:09.664668  774232 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-386191 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:embed-certs-386191 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
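	The unit override above is what gets scp'd to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines further down. A sketch for confirming the drop-in on the running node, assuming the standard minikube ssh command passthrough:
	out/minikube-linux-amd64 -p embed-certs-386191 ssh -- sudo systemctl cat kubelet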
	I1202 20:57:09.664730  774232 ssh_runner.go:195] Run: crio config
	I1202 20:57:09.712997  774232 cni.go:84] Creating CNI manager for ""
	I1202 20:57:09.713020  774232 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 20:57:09.713040  774232 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1202 20:57:09.713063  774232 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-386191 NodeName:embed-certs-386191 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 20:57:09.713233  774232 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-386191"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1202 20:57:09.713300  774232 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1202 20:57:09.721845  774232 binaries.go:51] Found k8s binaries, skipping transfer
	I1202 20:57:09.721930  774232 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 20:57:09.730141  774232 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (369 bytes)
	I1202 20:57:09.743834  774232 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1202 20:57:09.757119  774232 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2217 bytes)
	I1202 20:57:09.770727  774232 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1202 20:57:09.774934  774232 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 20:57:09.785672  774232 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 20:57:09.865745  774232 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 20:57:09.889834  774232 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191 for IP: 192.168.103.2
	I1202 20:57:09.889866  774232 certs.go:195] generating shared ca certs ...
	I1202 20:57:09.889885  774232 certs.go:227] acquiring lock for ca certs: {Name:mkea11924193dd4dc459e16d70207bde383196df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:57:09.890103  774232 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-407427/.minikube/ca.key
	I1202 20:57:09.890169  774232 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-407427/.minikube/proxy-client-ca.key
	I1202 20:57:09.890182  774232 certs.go:257] generating profile certs ...
	I1202 20:57:09.890312  774232 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/client.key
	I1202 20:57:09.890401  774232 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/apiserver.key.1b423d29
	I1202 20:57:09.890456  774232 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/proxy-client.key
	I1202 20:57:09.890593  774232 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/411032.pem (1338 bytes)
	W1202 20:57:09.890638  774232 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-407427/.minikube/certs/411032_empty.pem, impossibly tiny 0 bytes
	I1202 20:57:09.890652  774232 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca-key.pem (1679 bytes)
	I1202 20:57:09.890692  774232 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca.pem (1082 bytes)
	I1202 20:57:09.890723  774232 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/cert.pem (1123 bytes)
	I1202 20:57:09.890768  774232 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/key.pem (1675 bytes)
	I1202 20:57:09.890824  774232 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/files/etc/ssl/certs/4110322.pem (1708 bytes)
	I1202 20:57:09.891720  774232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 20:57:09.911874  774232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1202 20:57:09.933174  774232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 20:57:09.954654  774232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1202 20:57:09.980887  774232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1202 20:57:10.000742  774232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1202 20:57:10.020847  774232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 20:57:10.039958  774232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1202 20:57:10.059657  774232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 20:57:10.078374  774232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/certs/411032.pem --> /usr/share/ca-certificates/411032.pem (1338 bytes)
	I1202 20:57:10.098039  774232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/files/etc/ssl/certs/4110322.pem --> /usr/share/ca-certificates/4110322.pem (1708 bytes)
	I1202 20:57:10.116307  774232 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 20:57:10.130357  774232 ssh_runner.go:195] Run: openssl version
	I1202 20:57:10.137246  774232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 20:57:10.146555  774232 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 20:57:10.150803  774232 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 19:55 /usr/share/ca-certificates/minikubeCA.pem
	I1202 20:57:10.150871  774232 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 20:57:10.186704  774232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 20:57:10.195739  774232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/411032.pem && ln -fs /usr/share/ca-certificates/411032.pem /etc/ssl/certs/411032.pem"
	I1202 20:57:10.206054  774232 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/411032.pem
	I1202 20:57:10.210403  774232 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 20:13 /usr/share/ca-certificates/411032.pem
	I1202 20:57:10.210459  774232 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/411032.pem
	I1202 20:57:10.244732  774232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/411032.pem /etc/ssl/certs/51391683.0"
	I1202 20:57:10.253457  774232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4110322.pem && ln -fs /usr/share/ca-certificates/4110322.pem /etc/ssl/certs/4110322.pem"
	I1202 20:57:10.262832  774232 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4110322.pem
	I1202 20:57:10.267196  774232 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 20:13 /usr/share/ca-certificates/4110322.pem
	I1202 20:57:10.267281  774232 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4110322.pem
	I1202 20:57:10.303119  774232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4110322.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 20:57:10.312123  774232 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 20:57:10.316373  774232 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1202 20:57:10.351651  774232 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1202 20:57:10.388448  774232 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1202 20:57:10.435290  774232 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1202 20:57:10.482746  774232 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1202 20:57:10.536454  774232 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1202 20:57:10.595716  774232 kubeadm.go:401] StartCluster: {Name:embed-certs-386191 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-386191 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 20:57:10.595861  774232 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 20:57:10.595945  774232 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 20:57:10.627116  774232 cri.go:89] found id: "977a5d34d10349633d8b109d327cf440d676aa5501596ec9742db0005680b6ea"
	I1202 20:57:10.627141  774232 cri.go:89] found id: "7bbd6132314dd50edb345c367cfd40b9555ce01487136490278226bf20c9869c"
	I1202 20:57:10.627147  774232 cri.go:89] found id: "bb42ebc0538d2d4002108a87aba40e3d0ac601e9d3e24c09df1bd4436d20d164"
	I1202 20:57:10.627152  774232 cri.go:89] found id: "2d91f220a3e5c81f5d5d8cdae53244fd20a45b32d3e9cef96c94d22f621da68c"
	I1202 20:57:10.627155  774232 cri.go:89] found id: ""
	I1202 20:57:10.627205  774232 ssh_runner.go:195] Run: sudo runc list -f json
	W1202 20:57:10.639688  774232 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T20:57:10Z" level=error msg="open /run/runc: no such file or directory"
	I1202 20:57:10.639772  774232 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 20:57:10.648943  774232 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1202 20:57:10.648966  774232 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1202 20:57:10.649019  774232 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1202 20:57:10.657586  774232 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1202 20:57:10.658006  774232 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-386191" does not appear in /home/jenkins/minikube-integration/21997-407427/kubeconfig
	I1202 20:57:10.658141  774232 kubeconfig.go:62] /home/jenkins/minikube-integration/21997-407427/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-386191" cluster setting kubeconfig missing "embed-certs-386191" context setting]
	I1202 20:57:10.658440  774232 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-407427/kubeconfig: {Name:mk71769b01652992a8aaa239aae4f5c38b3500e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:57:10.659671  774232 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1202 20:57:10.668787  774232 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1202 20:57:10.668831  774232 kubeadm.go:602] duration metric: took 19.85856ms to restartPrimaryControlPlane
	I1202 20:57:10.668844  774232 kubeadm.go:403] duration metric: took 73.144155ms to StartCluster
	I1202 20:57:10.668866  774232 settings.go:142] acquiring lock: {Name:mk79858205e0c110b31d2645b49097e349ff37c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:57:10.668947  774232 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-407427/kubeconfig
	I1202 20:57:10.670108  774232 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-407427/kubeconfig: {Name:mk71769b01652992a8aaa239aae4f5c38b3500e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:57:10.670362  774232 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 20:57:10.670438  774232 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1202 20:57:10.670547  774232 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-386191"
	I1202 20:57:10.670574  774232 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-386191"
	W1202 20:57:10.670586  774232 addons.go:248] addon storage-provisioner should already be in state true
	I1202 20:57:10.670594  774232 addons.go:70] Setting dashboard=true in profile "embed-certs-386191"
	I1202 20:57:10.670613  774232 addons.go:239] Setting addon dashboard=true in "embed-certs-386191"
	W1202 20:57:10.670621  774232 addons.go:248] addon dashboard should already be in state true
	I1202 20:57:10.670621  774232 host.go:66] Checking if "embed-certs-386191" exists ...
	I1202 20:57:10.670619  774232 config.go:182] Loaded profile config "embed-certs-386191": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 20:57:10.670627  774232 addons.go:70] Setting default-storageclass=true in profile "embed-certs-386191"
	I1202 20:57:10.670645  774232 host.go:66] Checking if "embed-certs-386191" exists ...
	I1202 20:57:10.670663  774232 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-386191"
	I1202 20:57:10.671045  774232 cli_runner.go:164] Run: docker container inspect embed-certs-386191 --format={{.State.Status}}
	I1202 20:57:10.671190  774232 cli_runner.go:164] Run: docker container inspect embed-certs-386191 --format={{.State.Status}}
	I1202 20:57:10.671213  774232 cli_runner.go:164] Run: docker container inspect embed-certs-386191 --format={{.State.Status}}
	I1202 20:57:10.672419  774232 out.go:179] * Verifying Kubernetes components...
	I1202 20:57:10.673823  774232 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 20:57:10.699057  774232 addons.go:239] Setting addon default-storageclass=true in "embed-certs-386191"
	W1202 20:57:10.699103  774232 addons.go:248] addon default-storageclass should already be in state true
	I1202 20:57:10.699138  774232 host.go:66] Checking if "embed-certs-386191" exists ...
	I1202 20:57:10.699784  774232 cli_runner.go:164] Run: docker container inspect embed-certs-386191 --format={{.State.Status}}
	I1202 20:57:10.699783  774232 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1202 20:57:10.700586  774232 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 20:57:10.704270  774232 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 20:57:10.704292  774232 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1202 20:57:10.704299  774232 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1202 20:57:10.704357  774232 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-386191
	I1202 20:57:10.706434  774232 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1202 20:57:10.706458  774232 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1202 20:57:10.706512  774232 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-386191
	I1202 20:57:10.729620  774232 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1202 20:57:10.729645  774232 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1202 20:57:10.729717  774232 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-386191
	I1202 20:57:10.740552  774232 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/embed-certs-386191/id_rsa Username:docker}
	I1202 20:57:10.741769  774232 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/embed-certs-386191/id_rsa Username:docker}
	I1202 20:57:10.757420  774232 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/embed-certs-386191/id_rsa Username:docker}
	I1202 20:57:10.835616  774232 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 20:57:10.851324  774232 node_ready.go:35] waiting up to 6m0s for node "embed-certs-386191" to be "Ready" ...
	I1202 20:57:10.858797  774232 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1202 20:57:10.858823  774232 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1202 20:57:10.859643  774232 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 20:57:10.870825  774232 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1202 20:57:10.875378  774232 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1202 20:57:10.875407  774232 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1202 20:57:10.892562  774232 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1202 20:57:10.892708  774232 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1202 20:57:10.909628  774232 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1202 20:57:10.909654  774232 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1202 20:57:10.928318  774232 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1202 20:57:10.928351  774232 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1202 20:57:10.943558  774232 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1202 20:57:10.943584  774232 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1202 20:57:10.958869  774232 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1202 20:57:10.958898  774232 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1202 20:57:10.974913  774232 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1202 20:57:10.974944  774232 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1202 20:57:10.987816  774232 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1202 20:57:10.987849  774232 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1202 20:57:11.004038  774232 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1202 20:57:12.172221  774232 node_ready.go:49] node "embed-certs-386191" is "Ready"
	I1202 20:57:12.172281  774232 node_ready.go:38] duration metric: took 1.320900145s for node "embed-certs-386191" to be "Ready" ...
	I1202 20:57:12.172300  774232 api_server.go:52] waiting for apiserver process to appear ...
	I1202 20:57:12.172360  774232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:57:12.704939  774232 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.84526039s)
	I1202 20:57:12.705016  774232 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.834127091s)
	I1202 20:57:12.705189  774232 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.701112937s)
	I1202 20:57:12.705251  774232 api_server.go:72] duration metric: took 2.034851649s to wait for apiserver process to appear ...
	I1202 20:57:12.705408  774232 api_server.go:88] waiting for apiserver healthz status ...
	I1202 20:57:12.705427  774232 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1202 20:57:12.707214  774232 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-386191 addons enable metrics-server
	
	I1202 20:57:12.712812  774232 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 20:57:12.712839  774232 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 20:57:12.718943  774232 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1202 20:57:12.720446  774232 addons.go:530] duration metric: took 2.050020615s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1202 20:57:13.205727  774232 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1202 20:57:13.211193  774232 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 20:57:13.211231  774232 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 20:57:13.706040  774232 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1202 20:57:13.711063  774232 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1202 20:57:13.712524  774232 api_server.go:141] control plane version: v1.34.2
	I1202 20:57:13.712558  774232 api_server.go:131] duration metric: took 1.007141254s to wait for apiserver health ...
	I1202 20:57:13.712570  774232 system_pods.go:43] waiting for kube-system pods to appear ...
	I1202 20:57:13.716513  774232 system_pods.go:59] 8 kube-system pods found
	I1202 20:57:13.716559  774232 system_pods.go:61] "coredns-66bc5c9577-q6l9x" [e7159eb1-3cde-437a-99e3-760c9c397977] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 20:57:13.716572  774232 system_pods.go:61] "etcd-embed-certs-386191" [5ca26226-7c69-4d6b-a513-2187c089d96c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1202 20:57:13.716584  774232 system_pods.go:61] "kindnet-x9jsh" [410369de-877d-46e5-8f7c-cd8076d1d2f5] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1202 20:57:13.716604  774232 system_pods.go:61] "kube-apiserver-embed-certs-386191" [dd2a96bf-6afe-43b8-b01d-a88496c7c9b9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1202 20:57:13.716611  774232 system_pods.go:61] "kube-controller-manager-embed-certs-386191" [ba51ea44-285e-494e-a173-75f314cb6a5e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1202 20:57:13.716619  774232 system_pods.go:61] "kube-proxy-854r8" [6c9652b0-217c-466f-9345-7364f0e39936] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1202 20:57:13.716627  774232 system_pods.go:61] "kube-scheduler-embed-certs-386191" [abe313cb-061b-4d65-b0e0-f369aacdc1c4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1202 20:57:13.716632  774232 system_pods.go:61] "storage-provisioner" [d37e55bb-bb1f-4659-a9c5-14d47011bd23] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 20:57:13.716641  774232 system_pods.go:74] duration metric: took 4.063952ms to wait for pod list to return data ...
	I1202 20:57:13.716653  774232 default_sa.go:34] waiting for default service account to be created ...
	I1202 20:57:13.719337  774232 default_sa.go:45] found service account: "default"
	I1202 20:57:13.719363  774232 default_sa.go:55] duration metric: took 2.699939ms for default service account to be created ...
	I1202 20:57:13.719375  774232 system_pods.go:116] waiting for k8s-apps to be running ...
	I1202 20:57:13.722423  774232 system_pods.go:86] 8 kube-system pods found
	I1202 20:57:13.722455  774232 system_pods.go:89] "coredns-66bc5c9577-q6l9x" [e7159eb1-3cde-437a-99e3-760c9c397977] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 20:57:13.722463  774232 system_pods.go:89] "etcd-embed-certs-386191" [5ca26226-7c69-4d6b-a513-2187c089d96c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1202 20:57:13.722471  774232 system_pods.go:89] "kindnet-x9jsh" [410369de-877d-46e5-8f7c-cd8076d1d2f5] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1202 20:57:13.722485  774232 system_pods.go:89] "kube-apiserver-embed-certs-386191" [dd2a96bf-6afe-43b8-b01d-a88496c7c9b9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1202 20:57:13.722497  774232 system_pods.go:89] "kube-controller-manager-embed-certs-386191" [ba51ea44-285e-494e-a173-75f314cb6a5e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1202 20:57:13.722503  774232 system_pods.go:89] "kube-proxy-854r8" [6c9652b0-217c-466f-9345-7364f0e39936] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1202 20:57:13.722510  774232 system_pods.go:89] "kube-scheduler-embed-certs-386191" [abe313cb-061b-4d65-b0e0-f369aacdc1c4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1202 20:57:13.722515  774232 system_pods.go:89] "storage-provisioner" [d37e55bb-bb1f-4659-a9c5-14d47011bd23] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 20:57:13.722527  774232 system_pods.go:126] duration metric: took 3.142689ms to wait for k8s-apps to be running ...
	I1202 20:57:13.722536  774232 system_svc.go:44] waiting for kubelet service to be running ....
	I1202 20:57:13.722580  774232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 20:57:13.736470  774232 system_svc.go:56] duration metric: took 13.924054ms WaitForService to wait for kubelet
	I1202 20:57:13.736499  774232 kubeadm.go:587] duration metric: took 3.066103339s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 20:57:13.736524  774232 node_conditions.go:102] verifying NodePressure condition ...
	I1202 20:57:13.739874  774232 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1202 20:57:13.739902  774232 node_conditions.go:123] node cpu capacity is 8
	I1202 20:57:13.739915  774232 node_conditions.go:105] duration metric: took 3.385954ms to run NodePressure ...
	I1202 20:57:13.739928  774232 start.go:242] waiting for startup goroutines ...
	I1202 20:57:13.739939  774232 start.go:247] waiting for cluster config update ...
	I1202 20:57:13.739952  774232 start.go:256] writing updated cluster config ...
	I1202 20:57:13.740326  774232 ssh_runner.go:195] Run: rm -f paused
	I1202 20:57:13.744613  774232 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1202 20:57:13.748215  774232 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-q6l9x" in "kube-system" namespace to be "Ready" or be gone ...
	W1202 20:57:15.754394  774232 pod_ready.go:104] pod "coredns-66bc5c9577-q6l9x" is not "Ready", error: <nil>
	W1202 20:57:17.756805  774232 pod_ready.go:104] pod "coredns-66bc5c9577-q6l9x" is not "Ready", error: <nil>
	W1202 20:57:20.254748  774232 pod_ready.go:104] pod "coredns-66bc5c9577-q6l9x" is not "Ready", error: <nil>
	W1202 20:57:22.754847  774232 pod_ready.go:104] pod "coredns-66bc5c9577-q6l9x" is not "Ready", error: <nil>
	W1202 20:57:24.754901  774232 pod_ready.go:104] pod "coredns-66bc5c9577-q6l9x" is not "Ready", error: <nil>
	W1202 20:57:27.254021  774232 pod_ready.go:104] pod "coredns-66bc5c9577-q6l9x" is not "Ready", error: <nil>
	W1202 20:57:29.254601  774232 pod_ready.go:104] pod "coredns-66bc5c9577-q6l9x" is not "Ready", error: <nil>
	W1202 20:57:31.754661  774232 pod_ready.go:104] pod "coredns-66bc5c9577-q6l9x" is not "Ready", error: <nil>
	W1202 20:57:34.255139  774232 pod_ready.go:104] pod "coredns-66bc5c9577-q6l9x" is not "Ready", error: <nil>
	W1202 20:57:36.753491  774232 pod_ready.go:104] pod "coredns-66bc5c9577-q6l9x" is not "Ready", error: <nil>
	W1202 20:57:38.754840  774232 pod_ready.go:104] pod "coredns-66bc5c9577-q6l9x" is not "Ready", error: <nil>
	W1202 20:57:40.756205  774232 pod_ready.go:104] pod "coredns-66bc5c9577-q6l9x" is not "Ready", error: <nil>
	W1202 20:57:43.253901  774232 pod_ready.go:104] pod "coredns-66bc5c9577-q6l9x" is not "Ready", error: <nil>
	W1202 20:57:45.254849  774232 pod_ready.go:104] pod "coredns-66bc5c9577-q6l9x" is not "Ready", error: <nil>
	W1202 20:57:47.754159  774232 pod_ready.go:104] pod "coredns-66bc5c9577-q6l9x" is not "Ready", error: <nil>
	I1202 20:57:49.254583  774232 pod_ready.go:94] pod "coredns-66bc5c9577-q6l9x" is "Ready"
	I1202 20:57:49.254616  774232 pod_ready.go:86] duration metric: took 35.506377539s for pod "coredns-66bc5c9577-q6l9x" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:57:49.257369  774232 pod_ready.go:83] waiting for pod "etcd-embed-certs-386191" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:57:49.261904  774232 pod_ready.go:94] pod "etcd-embed-certs-386191" is "Ready"
	I1202 20:57:49.261934  774232 pod_ready.go:86] duration metric: took 4.541022ms for pod "etcd-embed-certs-386191" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:57:49.264267  774232 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-386191" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:57:49.268801  774232 pod_ready.go:94] pod "kube-apiserver-embed-certs-386191" is "Ready"
	I1202 20:57:49.268959  774232 pod_ready.go:86] duration metric: took 4.661362ms for pod "kube-apiserver-embed-certs-386191" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:57:49.271959  774232 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-386191" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:57:49.453416  774232 pod_ready.go:94] pod "kube-controller-manager-embed-certs-386191" is "Ready"
	I1202 20:57:49.453449  774232 pod_ready.go:86] duration metric: took 181.463804ms for pod "kube-controller-manager-embed-certs-386191" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:57:49.652760  774232 pod_ready.go:83] waiting for pod "kube-proxy-854r8" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:57:50.053271  774232 pod_ready.go:94] pod "kube-proxy-854r8" is "Ready"
	I1202 20:57:50.053306  774232 pod_ready.go:86] duration metric: took 400.519526ms for pod "kube-proxy-854r8" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:57:50.252936  774232 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-386191" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:57:50.652636  774232 pod_ready.go:94] pod "kube-scheduler-embed-certs-386191" is "Ready"
	I1202 20:57:50.652666  774232 pod_ready.go:86] duration metric: took 399.70043ms for pod "kube-scheduler-embed-certs-386191" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:57:50.652680  774232 pod_ready.go:40] duration metric: took 36.908030477s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1202 20:57:50.696685  774232 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1202 20:57:50.698749  774232 out.go:179] * Done! kubectl is now configured to use "embed-certs-386191" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 02 20:57:23 embed-certs-386191 crio[575]: time="2025-12-02T20:57:23.825651656Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 02 20:57:23 embed-certs-386191 crio[575]: time="2025-12-02T20:57:23.829389193Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 02 20:57:23 embed-certs-386191 crio[575]: time="2025-12-02T20:57:23.829417807Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 02 20:57:39 embed-certs-386191 crio[575]: time="2025-12-02T20:57:39.984375206Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=d301a522-aac8-495a-a658-cac992397cbd name=/runtime.v1.ImageService/ImageStatus
	Dec 02 20:57:39 embed-certs-386191 crio[575]: time="2025-12-02T20:57:39.985417968Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=e73569d3-0951-42e8-ae36-3f65bc8c98f7 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 20:57:39 embed-certs-386191 crio[575]: time="2025-12-02T20:57:39.986430445Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wfkqk/dashboard-metrics-scraper" id=f4db0d92-9ebc-4b60-a362-09f7381d017d name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 20:57:39 embed-certs-386191 crio[575]: time="2025-12-02T20:57:39.986569952Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 20:57:39 embed-certs-386191 crio[575]: time="2025-12-02T20:57:39.991677319Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 20:57:39 embed-certs-386191 crio[575]: time="2025-12-02T20:57:39.992284782Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 20:57:40 embed-certs-386191 crio[575]: time="2025-12-02T20:57:40.025597955Z" level=info msg="Created container 8a430412e5cdd121d367b7b4d53b1fa49127fabd0127bc78bee44ec9f14c657b: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wfkqk/dashboard-metrics-scraper" id=f4db0d92-9ebc-4b60-a362-09f7381d017d name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 20:57:40 embed-certs-386191 crio[575]: time="2025-12-02T20:57:40.02651668Z" level=info msg="Starting container: 8a430412e5cdd121d367b7b4d53b1fa49127fabd0127bc78bee44ec9f14c657b" id=bf411397-cc2c-4e98-861e-cfee396a835b name=/runtime.v1.RuntimeService/StartContainer
	Dec 02 20:57:40 embed-certs-386191 crio[575]: time="2025-12-02T20:57:40.028521177Z" level=info msg="Started container" PID=1760 containerID=8a430412e5cdd121d367b7b4d53b1fa49127fabd0127bc78bee44ec9f14c657b description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wfkqk/dashboard-metrics-scraper id=bf411397-cc2c-4e98-861e-cfee396a835b name=/runtime.v1.RuntimeService/StartContainer sandboxID=01e6db61986fab83443cc55fad85a1f9f1bfdbe21c74b1eec97433a68fd702f2
	Dec 02 20:57:40 embed-certs-386191 crio[575]: time="2025-12-02T20:57:40.096458732Z" level=info msg="Removing container: eb2020f7201b4a1980049db1cf35098ef46e7e67661b129617368e9376bf461c" id=d30af833-3f90-4def-bf3e-d91c145723bc name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 02 20:57:40 embed-certs-386191 crio[575]: time="2025-12-02T20:57:40.107476084Z" level=info msg="Removed container eb2020f7201b4a1980049db1cf35098ef46e7e67661b129617368e9376bf461c: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wfkqk/dashboard-metrics-scraper" id=d30af833-3f90-4def-bf3e-d91c145723bc name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 02 20:57:44 embed-certs-386191 crio[575]: time="2025-12-02T20:57:44.108358Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=83d07edb-5a83-4fd1-a115-02b7bc152467 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 20:57:44 embed-certs-386191 crio[575]: time="2025-12-02T20:57:44.109457879Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=a1144362-ea78-444d-9b52-1133120da854 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 20:57:44 embed-certs-386191 crio[575]: time="2025-12-02T20:57:44.11074607Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=76b8de18-d6d7-46a2-a855-276c4ea7403f name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 20:57:44 embed-certs-386191 crio[575]: time="2025-12-02T20:57:44.110894547Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 20:57:44 embed-certs-386191 crio[575]: time="2025-12-02T20:57:44.115357242Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 20:57:44 embed-certs-386191 crio[575]: time="2025-12-02T20:57:44.115516239Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/ba1318178a3df6e0c4a5c4b264cae8c03e300763c1e21e97ecc298030e5fb2a2/merged/etc/passwd: no such file or directory"
	Dec 02 20:57:44 embed-certs-386191 crio[575]: time="2025-12-02T20:57:44.115534099Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/ba1318178a3df6e0c4a5c4b264cae8c03e300763c1e21e97ecc298030e5fb2a2/merged/etc/group: no such file or directory"
	Dec 02 20:57:44 embed-certs-386191 crio[575]: time="2025-12-02T20:57:44.115802479Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 20:57:44 embed-certs-386191 crio[575]: time="2025-12-02T20:57:44.144736192Z" level=info msg="Created container fbbb7718121bf867e159d5cd1a6bf1edd51a7b976076819722430cf4282dc5dd: kube-system/storage-provisioner/storage-provisioner" id=76b8de18-d6d7-46a2-a855-276c4ea7403f name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 20:57:44 embed-certs-386191 crio[575]: time="2025-12-02T20:57:44.145560245Z" level=info msg="Starting container: fbbb7718121bf867e159d5cd1a6bf1edd51a7b976076819722430cf4282dc5dd" id=3be3a5f9-1b4b-4a0e-ab53-0d73bd70359b name=/runtime.v1.RuntimeService/StartContainer
	Dec 02 20:57:44 embed-certs-386191 crio[575]: time="2025-12-02T20:57:44.147456367Z" level=info msg="Started container" PID=1774 containerID=fbbb7718121bf867e159d5cd1a6bf1edd51a7b976076819722430cf4282dc5dd description=kube-system/storage-provisioner/storage-provisioner id=3be3a5f9-1b4b-4a0e-ab53-0d73bd70359b name=/runtime.v1.RuntimeService/StartContainer sandboxID=d1bc9f770cc98ee34b17735bf561cf361e0d0b1495d891d1ab151351c9dbf394
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	fbbb7718121bf       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           21 seconds ago      Running             storage-provisioner         1                   d1bc9f770cc98       storage-provisioner                          kube-system
	8a430412e5cdd       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           25 seconds ago      Exited              dashboard-metrics-scraper   2                   01e6db61986fa       dashboard-metrics-scraper-6ffb444bf9-wfkqk   kubernetes-dashboard
	3ca826e1199be       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   43 seconds ago      Running             kubernetes-dashboard        0                   a407dc1ce9ba8       kubernetes-dashboard-855c9754f9-zkxsp        kubernetes-dashboard
	fe0d3771d27b3       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           52 seconds ago      Running             coredns                     0                   4ef439d163c3e       coredns-66bc5c9577-q6l9x                     kube-system
	da9e328e7a69d       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           52 seconds ago      Running             busybox                     1                   87413d3618010       busybox                                      default
	8c1d49372b9e2       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           52 seconds ago      Running             kindnet-cni                 0                   50479fcfafba4       kindnet-x9jsh                                kube-system
	3e2d26d4dcdd3       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                           52 seconds ago      Running             kube-proxy                  0                   a7ec21b6ece66       kube-proxy-854r8                             kube-system
	b5ab2cecc2685       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           52 seconds ago      Exited              storage-provisioner         0                   d1bc9f770cc98       storage-provisioner                          kube-system
	977a5d34d1034       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                           55 seconds ago      Running             kube-apiserver              0                   f6dd2712e995d       kube-apiserver-embed-certs-386191            kube-system
	7bbd6132314dd       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                           55 seconds ago      Running             kube-controller-manager     0                   b45cc06e5246c       kube-controller-manager-embed-certs-386191   kube-system
	bb42ebc0538d2       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           55 seconds ago      Running             etcd                        0                   050fe7b3adcf1       etcd-embed-certs-386191                      kube-system
	2d91f220a3e5c       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                           55 seconds ago      Running             kube-scheduler              0                   a1d3646f4ea8d       kube-scheduler-embed-certs-386191            kube-system
	
	
	==> coredns [fe0d3771d27b3c21233deb323722886a95d260e7637dc80599a483422f200d04] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:37930 - 2031 "HINFO IN 1010263415870260272.6391674541796422016. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.026700371s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-386191
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-386191
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c0dec3f81fa1d67ed4ee425e0873d0aa009f9b92
	                    minikube.k8s.io/name=embed-certs-386191
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_02T20_56_13_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 02 Dec 2025 20:56:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-386191
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 02 Dec 2025 20:57:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 02 Dec 2025 20:57:42 +0000   Tue, 02 Dec 2025 20:56:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 02 Dec 2025 20:57:42 +0000   Tue, 02 Dec 2025 20:56:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 02 Dec 2025 20:57:42 +0000   Tue, 02 Dec 2025 20:56:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 02 Dec 2025 20:57:42 +0000   Tue, 02 Dec 2025 20:56:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    embed-certs-386191
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 c31a325af81b969158c21fa769271857
	  System UUID:                f83f142d-7c61-4329-95b4-56ae3cea973b
	  Boot ID:                    9dd5456f-e394-4b4b-9458-48d26faf507e
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 coredns-66bc5c9577-q6l9x                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     107s
	  kube-system                 etcd-embed-certs-386191                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         114s
	  kube-system                 kindnet-x9jsh                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      109s
	  kube-system                 kube-apiserver-embed-certs-386191             250m (3%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-controller-manager-embed-certs-386191    200m (2%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-proxy-854r8                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-scheduler-embed-certs-386191             100m (1%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-wfkqk    0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-zkxsp         0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 106s               kube-proxy       
	  Normal  Starting                 52s                kube-proxy       
	  Normal  NodeHasSufficientMemory  113s               kubelet          Node embed-certs-386191 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    113s               kubelet          Node embed-certs-386191 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     113s               kubelet          Node embed-certs-386191 status is now: NodeHasSufficientPID
	  Normal  Starting                 113s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           109s               node-controller  Node embed-certs-386191 event: Registered Node embed-certs-386191 in Controller
	  Normal  NodeReady                95s                kubelet          Node embed-certs-386191 status is now: NodeReady
	  Normal  Starting                 56s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  55s (x8 over 56s)  kubelet          Node embed-certs-386191 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    55s (x8 over 56s)  kubelet          Node embed-certs-386191 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     55s (x8 over 56s)  kubelet          Node embed-certs-386191 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           50s                node-controller  Node embed-certs-386191 event: Registered Node embed-certs-386191 in Controller
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: a6 a8 6c 2c e4 fb 66 95 0d 9f b9 e6 08 00
	[ +32.254238] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a6 a8 6c 2c e4 fb 66 95 0d 9f b9 e6 08 00
	[Dec 2 20:52] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 62 03 bd 14 45 8a 08 06
	[  +0.000590] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 96 27 ad 0d 40 04 08 06
	[Dec 2 20:53] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 56 d8 4f 6f 7c f7 08 06
	[  +0.000700] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 82 e4 ba c0 78 5f 08 06
	[ +10.119645] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000022] ll header: 00000000: ff ff ff ff ff ff ea 15 d8 88 c9 57 08 06
	[  +2.447166] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 62 df 09 53 d6 6e 08 06
	[  +0.000374] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 62 8d 06 71 0a 5e 08 06
	[Dec 2 20:54] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 12 47 13 50 f6 bc 08 06
	[  +0.001523] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ea 15 d8 88 c9 57 08 06
	[ +22.123549] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 36 0d 45 06 42 2a 08 06
	[  +0.000389] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 56 d8 4f 6f 7c f7 08 06
	
	
	==> etcd [bb42ebc0538d2d4002108a87aba40e3d0ac601e9d3e24c09df1bd4436d20d164] <==
	{"level":"warn","ts":"2025-12-02T20:57:11.535154Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:57:11.542092Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59302","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:57:11.552304Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59312","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:57:11.559446Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:57:11.567131Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:57:11.574371Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:57:11.581187Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:57:11.588176Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:57:11.594864Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59454","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:57:11.603162Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59460","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:57:11.610280Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:57:11.617730Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:57:11.625164Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59526","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:57:11.633656Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:57:11.648313Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:57:11.656728Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:57:11.665222Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:57:11.673351Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:57:11.681101Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:57:11.688306Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59682","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:57:11.706371Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59708","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:57:11.710238Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59714","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:57:11.717260Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59746","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:57:11.724526Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:57:11.773044Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59810","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 20:58:05 up  2:40,  0 user,  load average: 1.58, 3.27, 2.56
	Linux embed-certs-386191 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [8c1d49372b9e24a9b37e0b6123939de40a38bc39fb2d3b737f65fd8154b00adb] <==
	I1202 20:57:13.511354       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1202 20:57:13.511585       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1202 20:57:13.511752       1 main.go:148] setting mtu 1500 for CNI 
	I1202 20:57:13.511768       1 main.go:178] kindnetd IP family: "ipv4"
	I1202 20:57:13.511788       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-02T20:57:13Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1202 20:57:13.811845       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1202 20:57:13.812319       1 controller.go:381] "Waiting for informer caches to sync"
	I1202 20:57:13.812344       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1202 20:57:13.812507       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1202 20:57:14.307157       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1202 20:57:14.307458       1 metrics.go:72] Registering metrics
	I1202 20:57:14.307537       1 controller.go:711] "Syncing nftables rules"
	I1202 20:57:23.812734       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1202 20:57:23.812813       1 main.go:301] handling current node
	I1202 20:57:33.815418       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1202 20:57:33.815465       1 main.go:301] handling current node
	I1202 20:57:43.812289       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1202 20:57:43.812341       1 main.go:301] handling current node
	I1202 20:57:53.815844       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1202 20:57:53.815881       1 main.go:301] handling current node
	I1202 20:58:03.820149       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1202 20:58:03.820184       1 main.go:301] handling current node
	
	
	==> kube-apiserver [977a5d34d10349633d8b109d327cf440d676aa5501596ec9742db0005680b6ea] <==
	I1202 20:57:12.242623       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1202 20:57:12.242631       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1202 20:57:12.242693       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1202 20:57:12.242742       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1202 20:57:12.242779       1 aggregator.go:171] initial CRD sync complete...
	I1202 20:57:12.242788       1 autoregister_controller.go:144] Starting autoregister controller
	I1202 20:57:12.242796       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1202 20:57:12.242804       1 cache.go:39] Caches are synced for autoregister controller
	I1202 20:57:12.243117       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1202 20:57:12.243164       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1202 20:57:12.247253       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1202 20:57:12.249279       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1202 20:57:12.249752       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1202 20:57:12.296113       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1202 20:57:12.512701       1 controller.go:667] quota admission added evaluator for: namespaces
	I1202 20:57:12.543881       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1202 20:57:12.567675       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1202 20:57:12.576840       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1202 20:57:12.584419       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1202 20:57:12.625935       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.223.192"}
	I1202 20:57:12.637516       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.28.127"}
	I1202 20:57:13.145541       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1202 20:57:16.020410       1 controller.go:667] quota admission added evaluator for: endpoints
	I1202 20:57:16.071645       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1202 20:57:16.120421       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [7bbd6132314dd50edb345c367cfd40b9555ce01487136490278226bf20c9869c] <==
	I1202 20:57:15.580404       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1202 20:57:15.582700       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1202 20:57:15.584989       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1202 20:57:15.587412       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1202 20:57:15.589938       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1202 20:57:15.617141       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1202 20:57:15.617178       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1202 20:57:15.617185       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1202 20:57:15.617244       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1202 20:57:15.617231       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1202 20:57:15.617286       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1202 20:57:15.617431       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1202 20:57:15.617474       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1202 20:57:15.617674       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1202 20:57:15.617730       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1202 20:57:15.620099       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1202 20:57:15.620116       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1202 20:57:15.620126       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1202 20:57:15.622613       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1202 20:57:15.622631       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1202 20:57:15.622878       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1202 20:57:15.629583       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1202 20:57:15.631847       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1202 20:57:15.634154       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1202 20:57:15.638489       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [3e2d26d4dcdd30ce3fe9e663bdd5abd2a899569ad144e98d6aec4179569df0cf] <==
	I1202 20:57:13.387217       1 server_linux.go:53] "Using iptables proxy"
	I1202 20:57:13.469866       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1202 20:57:13.570718       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1202 20:57:13.570779       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1202 20:57:13.570870       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1202 20:57:13.590338       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1202 20:57:13.590408       1 server_linux.go:132] "Using iptables Proxier"
	I1202 20:57:13.596641       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1202 20:57:13.597187       1 server.go:527] "Version info" version="v1.34.2"
	I1202 20:57:13.597228       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 20:57:13.598400       1 config.go:403] "Starting serviceCIDR config controller"
	I1202 20:57:13.598424       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1202 20:57:13.598430       1 config.go:200] "Starting service config controller"
	I1202 20:57:13.598449       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1202 20:57:13.598464       1 config.go:106] "Starting endpoint slice config controller"
	I1202 20:57:13.598469       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1202 20:57:13.598484       1 config.go:309] "Starting node config controller"
	I1202 20:57:13.598499       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1202 20:57:13.598507       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1202 20:57:13.699534       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1202 20:57:13.699597       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1202 20:57:13.699662       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [2d91f220a3e5c81f5d5d8cdae53244fd20a45b32d3e9cef96c94d22f621da68c] <==
	I1202 20:57:11.105107       1 serving.go:386] Generated self-signed cert in-memory
	I1202 20:57:12.223460       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1202 20:57:12.223485       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 20:57:12.228871       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1202 20:57:12.228905       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1202 20:57:12.228903       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1202 20:57:12.228929       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1202 20:57:12.228938       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1202 20:57:12.228966       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1202 20:57:12.229388       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1202 20:57:12.229441       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1202 20:57:12.329358       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1202 20:57:12.329421       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1202 20:57:12.329358       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 02 20:57:16 embed-certs-386191 kubelet[740]: I1202 20:57:16.307003     740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knklp\" (UniqueName: \"kubernetes.io/projected/ed5d6b00-eaf8-41d7-90ee-e4c7a6a3f869-kube-api-access-knklp\") pod \"kubernetes-dashboard-855c9754f9-zkxsp\" (UID: \"ed5d6b00-eaf8-41d7-90ee-e4c7a6a3f869\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-zkxsp"
	Dec 02 20:57:16 embed-certs-386191 kubelet[740]: I1202 20:57:16.307249     740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmjhb\" (UniqueName: \"kubernetes.io/projected/5c418dfb-12a5-4496-b536-887e6972d44b-kube-api-access-qmjhb\") pod \"dashboard-metrics-scraper-6ffb444bf9-wfkqk\" (UID: \"5c418dfb-12a5-4496-b536-887e6972d44b\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wfkqk"
	Dec 02 20:57:18 embed-certs-386191 kubelet[740]: I1202 20:57:18.884982     740 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Dec 02 20:57:19 embed-certs-386191 kubelet[740]: I1202 20:57:19.030498     740 scope.go:117] "RemoveContainer" containerID="b1f45c83e32e7813a30116c3ea39f0351e6dcd89f5d0b9e454af3de6a739e648"
	Dec 02 20:57:20 embed-certs-386191 kubelet[740]: I1202 20:57:20.035949     740 scope.go:117] "RemoveContainer" containerID="b1f45c83e32e7813a30116c3ea39f0351e6dcd89f5d0b9e454af3de6a739e648"
	Dec 02 20:57:20 embed-certs-386191 kubelet[740]: I1202 20:57:20.036174     740 scope.go:117] "RemoveContainer" containerID="eb2020f7201b4a1980049db1cf35098ef46e7e67661b129617368e9376bf461c"
	Dec 02 20:57:20 embed-certs-386191 kubelet[740]: E1202 20:57:20.036385     740 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wfkqk_kubernetes-dashboard(5c418dfb-12a5-4496-b536-887e6972d44b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wfkqk" podUID="5c418dfb-12a5-4496-b536-887e6972d44b"
	Dec 02 20:57:21 embed-certs-386191 kubelet[740]: I1202 20:57:21.043141     740 scope.go:117] "RemoveContainer" containerID="eb2020f7201b4a1980049db1cf35098ef46e7e67661b129617368e9376bf461c"
	Dec 02 20:57:21 embed-certs-386191 kubelet[740]: E1202 20:57:21.043357     740 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wfkqk_kubernetes-dashboard(5c418dfb-12a5-4496-b536-887e6972d44b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wfkqk" podUID="5c418dfb-12a5-4496-b536-887e6972d44b"
	Dec 02 20:57:23 embed-certs-386191 kubelet[740]: I1202 20:57:23.060282     740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-zkxsp" podStartSLOduration=1.15439516 podStartE2EDuration="7.060258691s" podCreationTimestamp="2025-12-02 20:57:16 +0000 UTC" firstStartedPulling="2025-12-02 20:57:16.569291906 +0000 UTC m=+6.676862837" lastFinishedPulling="2025-12-02 20:57:22.475155443 +0000 UTC m=+12.582726368" observedRunningTime="2025-12-02 20:57:23.060091652 +0000 UTC m=+13.167662595" watchObservedRunningTime="2025-12-02 20:57:23.060258691 +0000 UTC m=+13.167829636"
	Dec 02 20:57:25 embed-certs-386191 kubelet[740]: I1202 20:57:25.495448     740 scope.go:117] "RemoveContainer" containerID="eb2020f7201b4a1980049db1cf35098ef46e7e67661b129617368e9376bf461c"
	Dec 02 20:57:25 embed-certs-386191 kubelet[740]: E1202 20:57:25.495644     740 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wfkqk_kubernetes-dashboard(5c418dfb-12a5-4496-b536-887e6972d44b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wfkqk" podUID="5c418dfb-12a5-4496-b536-887e6972d44b"
	Dec 02 20:57:39 embed-certs-386191 kubelet[740]: I1202 20:57:39.983888     740 scope.go:117] "RemoveContainer" containerID="eb2020f7201b4a1980049db1cf35098ef46e7e67661b129617368e9376bf461c"
	Dec 02 20:57:40 embed-certs-386191 kubelet[740]: I1202 20:57:40.094965     740 scope.go:117] "RemoveContainer" containerID="eb2020f7201b4a1980049db1cf35098ef46e7e67661b129617368e9376bf461c"
	Dec 02 20:57:40 embed-certs-386191 kubelet[740]: I1202 20:57:40.095160     740 scope.go:117] "RemoveContainer" containerID="8a430412e5cdd121d367b7b4d53b1fa49127fabd0127bc78bee44ec9f14c657b"
	Dec 02 20:57:40 embed-certs-386191 kubelet[740]: E1202 20:57:40.095391     740 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wfkqk_kubernetes-dashboard(5c418dfb-12a5-4496-b536-887e6972d44b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wfkqk" podUID="5c418dfb-12a5-4496-b536-887e6972d44b"
	Dec 02 20:57:44 embed-certs-386191 kubelet[740]: I1202 20:57:44.107943     740 scope.go:117] "RemoveContainer" containerID="b5ab2cecc26850a5fcfdda9460fabe3dfee322129d5a7ffa87daa1e4390a54cb"
	Dec 02 20:57:45 embed-certs-386191 kubelet[740]: I1202 20:57:45.495927     740 scope.go:117] "RemoveContainer" containerID="8a430412e5cdd121d367b7b4d53b1fa49127fabd0127bc78bee44ec9f14c657b"
	Dec 02 20:57:45 embed-certs-386191 kubelet[740]: E1202 20:57:45.496234     740 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wfkqk_kubernetes-dashboard(5c418dfb-12a5-4496-b536-887e6972d44b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wfkqk" podUID="5c418dfb-12a5-4496-b536-887e6972d44b"
	Dec 02 20:57:56 embed-certs-386191 kubelet[740]: I1202 20:57:56.983619     740 scope.go:117] "RemoveContainer" containerID="8a430412e5cdd121d367b7b4d53b1fa49127fabd0127bc78bee44ec9f14c657b"
	Dec 02 20:57:56 embed-certs-386191 kubelet[740]: E1202 20:57:56.983831     740 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wfkqk_kubernetes-dashboard(5c418dfb-12a5-4496-b536-887e6972d44b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wfkqk" podUID="5c418dfb-12a5-4496-b536-887e6972d44b"
	Dec 02 20:58:02 embed-certs-386191 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 02 20:58:02 embed-certs-386191 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 02 20:58:02 embed-certs-386191 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 20:58:02 embed-certs-386191 systemd[1]: kubelet.service: Consumed 1.837s CPU time.
	
	
	==> kubernetes-dashboard [3ca826e1199be159f228fc829ee2aa57f744353729960f312b4007dab7811bd8] <==
	2025/12/02 20:57:22 Starting overwatch
	2025/12/02 20:57:22 Using namespace: kubernetes-dashboard
	2025/12/02 20:57:22 Using in-cluster config to connect to apiserver
	2025/12/02 20:57:22 Using secret token for csrf signing
	2025/12/02 20:57:22 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/02 20:57:22 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/02 20:57:22 Successful initial request to the apiserver, version: v1.34.2
	2025/12/02 20:57:22 Generating JWE encryption key
	2025/12/02 20:57:22 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/02 20:57:22 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/02 20:57:22 Initializing JWE encryption key from synchronized object
	2025/12/02 20:57:22 Creating in-cluster Sidecar client
	2025/12/02 20:57:22 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/02 20:57:22 Serving insecurely on HTTP port: 9090
	2025/12/02 20:57:52 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [b5ab2cecc26850a5fcfdda9460fabe3dfee322129d5a7ffa87daa1e4390a54cb] <==
	I1202 20:57:13.347131       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1202 20:57:43.349798       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [fbbb7718121bf867e159d5cd1a6bf1edd51a7b976076819722430cf4282dc5dd] <==
	I1202 20:57:44.160028       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1202 20:57:44.167914       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1202 20:57:44.167956       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1202 20:57:44.170199       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:57:47.625348       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:57:51.885875       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:57:55.484712       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:57:58.538231       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:58:01.561254       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:58:01.565805       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1202 20:58:01.565948       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1202 20:58:01.566096       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"34c56701-7501-4c39-8645-5294da9c60ee", APIVersion:"v1", ResourceVersion:"633", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-386191_00067f6e-056e-4fd5-be99-0ad0554d7df5 became leader
	I1202 20:58:01.566122       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-386191_00067f6e-056e-4fd5-be99-0ad0554d7df5!
	W1202 20:58:01.568119       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:58:01.571063       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1202 20:58:01.666460       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-386191_00067f6e-056e-4fd5-be99-0ad0554d7df5!
	W1202 20:58:03.573978       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:58:03.578414       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:58:05.581746       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:58:05.586264       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-386191 -n embed-certs-386191
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-386191 -n embed-certs-386191: exit status 2 (341.508192ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-386191 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-386191
helpers_test.go:243: (dbg) docker inspect embed-certs-386191:

-- stdout --
	[
	    {
	        "Id": "59d0941ced13c8c23df4b796c1700e71da3dff2dc9f33015ee98bb7f9e84e6c9",
	        "Created": "2025-12-02T20:55:55.991908115Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 774433,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-02T20:57:03.809622421Z",
	            "FinishedAt": "2025-12-02T20:57:02.921670957Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/59d0941ced13c8c23df4b796c1700e71da3dff2dc9f33015ee98bb7f9e84e6c9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/59d0941ced13c8c23df4b796c1700e71da3dff2dc9f33015ee98bb7f9e84e6c9/hostname",
	        "HostsPath": "/var/lib/docker/containers/59d0941ced13c8c23df4b796c1700e71da3dff2dc9f33015ee98bb7f9e84e6c9/hosts",
	        "LogPath": "/var/lib/docker/containers/59d0941ced13c8c23df4b796c1700e71da3dff2dc9f33015ee98bb7f9e84e6c9/59d0941ced13c8c23df4b796c1700e71da3dff2dc9f33015ee98bb7f9e84e6c9-json.log",
	        "Name": "/embed-certs-386191",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "embed-certs-386191:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-386191",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "59d0941ced13c8c23df4b796c1700e71da3dff2dc9f33015ee98bb7f9e84e6c9",
	                "LowerDir": "/var/lib/docker/overlay2/cd263fb850dea457d23961af62640291018121fa574740d96ea92fe99c9aa05c-init/diff:/var/lib/docker/overlay2/49bb44e987885c48600d5ae7ad3c81cad82a00f38070c0460882c5746d9fae59/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cd263fb850dea457d23961af62640291018121fa574740d96ea92fe99c9aa05c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cd263fb850dea457d23961af62640291018121fa574740d96ea92fe99c9aa05c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cd263fb850dea457d23961af62640291018121fa574740d96ea92fe99c9aa05c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-386191",
	                "Source": "/var/lib/docker/volumes/embed-certs-386191/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-386191",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-386191",
	                "name.minikube.sigs.k8s.io": "embed-certs-386191",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "d9e4366bed059f165927933f953c2fe4b31778e9683f03f5c4d7f707f537913c",
	            "SandboxKey": "/var/run/docker/netns/d9e4366bed05",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33518"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33519"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33522"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33520"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33521"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-386191": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "09e54ca661ff7c94761e454bfdeba97f3291ada6df7679173f7c9249a52d8235",
	                    "EndpointID": "bb8c77e970185dc84d4c692db55219019e72ca1ab30bf78c163dd98c15856dd2",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "62:16:ac:40:8a:20",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-386191",
	                        "59d0941ced13"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-386191 -n embed-certs-386191
E1202 20:58:06.856735  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/calico-775392/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-386191 -n embed-certs-386191: exit status 2 (334.353091ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-386191 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-386191 logs -n 25: (1.148990032s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                   │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ pause   │ -p newest-cni-245604 --alsologtostderr -v=1                                                                                                                              │ newest-cni-245604            │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-997805 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                  │ default-k8s-diff-port-997805 │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:55 UTC │
	│ start   │ -p default-k8s-diff-port-997805 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2 │ default-k8s-diff-port-997805 │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:56 UTC │
	│ delete  │ -p newest-cni-245604                                                                                                                                                     │ newest-cni-245604            │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:55 UTC │
	│ delete  │ -p newest-cni-245604                                                                                                                                                     │ newest-cni-245604            │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:55 UTC │
	│ delete  │ -p disable-driver-mounts-234978                                                                                                                                          │ disable-driver-mounts-234978 │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:55 UTC │
	│ start   │ -p embed-certs-386191 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                   │ embed-certs-386191           │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:56 UTC │
	│ image   │ old-k8s-version-992336 image list --format=json                                                                                                                          │ old-k8s-version-992336       │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:55 UTC │
	│ pause   │ -p old-k8s-version-992336 --alsologtostderr -v=1                                                                                                                         │ old-k8s-version-992336       │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │                     │
	│ delete  │ -p old-k8s-version-992336                                                                                                                                                │ old-k8s-version-992336       │ jenkins │ v1.37.0 │ 02 Dec 25 20:55 UTC │ 02 Dec 25 20:56 UTC │
	│ delete  │ -p old-k8s-version-992336                                                                                                                                                │ old-k8s-version-992336       │ jenkins │ v1.37.0 │ 02 Dec 25 20:56 UTC │ 02 Dec 25 20:56 UTC │
	│ image   │ no-preload-336331 image list --format=json                                                                                                                               │ no-preload-336331            │ jenkins │ v1.37.0 │ 02 Dec 25 20:56 UTC │ 02 Dec 25 20:56 UTC │
	│ pause   │ -p no-preload-336331 --alsologtostderr -v=1                                                                                                                              │ no-preload-336331            │ jenkins │ v1.37.0 │ 02 Dec 25 20:56 UTC │                     │
	│ delete  │ -p no-preload-336331                                                                                                                                                     │ no-preload-336331            │ jenkins │ v1.37.0 │ 02 Dec 25 20:56 UTC │ 02 Dec 25 20:56 UTC │
	│ delete  │ -p no-preload-336331                                                                                                                                                     │ no-preload-336331            │ jenkins │ v1.37.0 │ 02 Dec 25 20:56 UTC │ 02 Dec 25 20:56 UTC │
	│ addons  │ enable metrics-server -p embed-certs-386191 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                 │ embed-certs-386191           │ jenkins │ v1.37.0 │ 02 Dec 25 20:56 UTC │                     │
	│ image   │ default-k8s-diff-port-997805 image list --format=json                                                                                                                    │ default-k8s-diff-port-997805 │ jenkins │ v1.37.0 │ 02 Dec 25 20:56 UTC │ 02 Dec 25 20:56 UTC │
	│ stop    │ -p embed-certs-386191 --alsologtostderr -v=3                                                                                                                             │ embed-certs-386191           │ jenkins │ v1.37.0 │ 02 Dec 25 20:56 UTC │ 02 Dec 25 20:57 UTC │
	│ pause   │ -p default-k8s-diff-port-997805 --alsologtostderr -v=1                                                                                                                   │ default-k8s-diff-port-997805 │ jenkins │ v1.37.0 │ 02 Dec 25 20:56 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-997805                                                                                                                                          │ default-k8s-diff-port-997805 │ jenkins │ v1.37.0 │ 02 Dec 25 20:56 UTC │ 02 Dec 25 20:56 UTC │
	│ delete  │ -p default-k8s-diff-port-997805                                                                                                                                          │ default-k8s-diff-port-997805 │ jenkins │ v1.37.0 │ 02 Dec 25 20:56 UTC │ 02 Dec 25 20:56 UTC │
	│ addons  │ enable dashboard -p embed-certs-386191 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                            │ embed-certs-386191           │ jenkins │ v1.37.0 │ 02 Dec 25 20:57 UTC │ 02 Dec 25 20:57 UTC │
	│ start   │ -p embed-certs-386191 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                   │ embed-certs-386191           │ jenkins │ v1.37.0 │ 02 Dec 25 20:57 UTC │ 02 Dec 25 20:57 UTC │
	│ image   │ embed-certs-386191 image list --format=json                                                                                                                              │ embed-certs-386191           │ jenkins │ v1.37.0 │ 02 Dec 25 20:58 UTC │ 02 Dec 25 20:58 UTC │
	│ pause   │ -p embed-certs-386191 --alsologtostderr -v=1                                                                                                                             │ embed-certs-386191           │ jenkins │ v1.37.0 │ 02 Dec 25 20:58 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 20:57:03
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 20:57:03.568914  774232 out.go:360] Setting OutFile to fd 1 ...
	I1202 20:57:03.569228  774232 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:57:03.569238  774232 out.go:374] Setting ErrFile to fd 2...
	I1202 20:57:03.569243  774232 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:57:03.569469  774232 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-407427/.minikube/bin
	I1202 20:57:03.569936  774232 out.go:368] Setting JSON to false
	I1202 20:57:03.571099  774232 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":9568,"bootTime":1764699456,"procs":315,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 20:57:03.571176  774232 start.go:143] virtualization: kvm guest
	I1202 20:57:03.574199  774232 out.go:179] * [embed-certs-386191] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1202 20:57:03.575811  774232 out.go:179]   - MINIKUBE_LOCATION=21997
	I1202 20:57:03.575844  774232 notify.go:221] Checking for updates...
	I1202 20:57:03.578362  774232 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 20:57:03.580018  774232 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-407427/kubeconfig
	I1202 20:57:03.584349  774232 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-407427/.minikube
	I1202 20:57:03.585795  774232 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1202 20:57:03.587098  774232 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 20:57:03.588892  774232 config.go:182] Loaded profile config "embed-certs-386191": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 20:57:03.589445  774232 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 20:57:03.613212  774232 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1202 20:57:03.613340  774232 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 20:57:03.672046  774232 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:44 SystemTime:2025-12-02 20:57:03.662081286 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 20:57:03.672188  774232 docker.go:319] overlay module found
	I1202 20:57:03.674861  774232 out.go:179] * Using the docker driver based on existing profile
	I1202 20:57:03.676435  774232 start.go:309] selected driver: docker
	I1202 20:57:03.676454  774232 start.go:927] validating driver "docker" against &{Name:embed-certs-386191 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-386191 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 20:57:03.676549  774232 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 20:57:03.677183  774232 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 20:57:03.734034  774232 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:44 SystemTime:2025-12-02 20:57:03.724613595 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 20:57:03.734373  774232 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 20:57:03.734411  774232 cni.go:84] Creating CNI manager for ""
	I1202 20:57:03.734477  774232 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 20:57:03.734515  774232 start.go:353] cluster config:
	{Name:embed-certs-386191 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-386191 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fals
e DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 20:57:03.736340  774232 out.go:179] * Starting "embed-certs-386191" primary control-plane node in "embed-certs-386191" cluster
	I1202 20:57:03.737738  774232 cache.go:134] Beginning downloading kic base image for docker with crio
	I1202 20:57:03.739334  774232 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1202 20:57:03.740648  774232 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 20:57:03.740693  774232 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 20:57:03.740696  774232 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21997-407427/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1202 20:57:03.740735  774232 cache.go:65] Caching tarball of preloaded images
	I1202 20:57:03.740883  774232 preload.go:238] Found /home/jenkins/minikube-integration/21997-407427/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1202 20:57:03.740914  774232 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1202 20:57:03.741042  774232 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/config.json ...
	I1202 20:57:03.762164  774232 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1202 20:57:03.762185  774232 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1202 20:57:03.762202  774232 cache.go:243] Successfully downloaded all kic artifacts
	I1202 20:57:03.762237  774232 start.go:360] acquireMachinesLock for embed-certs-386191: {Name:mk07b451c8d7193712ed79603183bf03b141f2ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 20:57:03.762295  774232 start.go:364] duration metric: took 38.667µs to acquireMachinesLock for "embed-certs-386191"
	I1202 20:57:03.762311  774232 start.go:96] Skipping create...Using existing machine configuration
	I1202 20:57:03.762318  774232 fix.go:54] fixHost starting: 
	I1202 20:57:03.762528  774232 cli_runner.go:164] Run: docker container inspect embed-certs-386191 --format={{.State.Status}}
	I1202 20:57:03.781195  774232 fix.go:112] recreateIfNeeded on embed-certs-386191: state=Stopped err=<nil>
	W1202 20:57:03.781249  774232 fix.go:138] unexpected machine state, will restart: <nil>
	I1202 20:57:03.783099  774232 out.go:252] * Restarting existing docker container for "embed-certs-386191" ...
	I1202 20:57:03.783194  774232 cli_runner.go:164] Run: docker start embed-certs-386191
	I1202 20:57:04.040060  774232 cli_runner.go:164] Run: docker container inspect embed-certs-386191 --format={{.State.Status}}
	I1202 20:57:04.060513  774232 kic.go:430] container "embed-certs-386191" state is running.
	I1202 20:57:04.060964  774232 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-386191
	I1202 20:57:04.080024  774232 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/config.json ...
	I1202 20:57:04.080418  774232 machine.go:94] provisionDockerMachine start ...
	I1202 20:57:04.080523  774232 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-386191
	I1202 20:57:04.100700  774232 main.go:143] libmachine: Using SSH client type: native
	I1202 20:57:04.101007  774232 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33518 <nil> <nil>}
	I1202 20:57:04.101024  774232 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 20:57:04.101723  774232 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:34366->127.0.0.1:33518: read: connection reset by peer
	I1202 20:57:07.244692  774232 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-386191
	
	I1202 20:57:07.244732  774232 ubuntu.go:182] provisioning hostname "embed-certs-386191"
	I1202 20:57:07.244811  774232 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-386191
	I1202 20:57:07.264773  774232 main.go:143] libmachine: Using SSH client type: native
	I1202 20:57:07.265037  774232 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33518 <nil> <nil>}
	I1202 20:57:07.265057  774232 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-386191 && echo "embed-certs-386191" | sudo tee /etc/hostname
	I1202 20:57:07.417302  774232 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-386191
	
	I1202 20:57:07.417376  774232 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-386191
	I1202 20:57:07.436562  774232 main.go:143] libmachine: Using SSH client type: native
	I1202 20:57:07.436797  774232 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33518 <nil> <nil>}
	I1202 20:57:07.436815  774232 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-386191' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-386191/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-386191' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 20:57:07.579568  774232 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1202 20:57:07.579608  774232 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-407427/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-407427/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-407427/.minikube}
	I1202 20:57:07.579633  774232 ubuntu.go:190] setting up certificates
	I1202 20:57:07.579646  774232 provision.go:84] configureAuth start
	I1202 20:57:07.579715  774232 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-386191
	I1202 20:57:07.599317  774232 provision.go:143] copyHostCerts
	I1202 20:57:07.599396  774232 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-407427/.minikube/cert.pem, removing ...
	I1202 20:57:07.599412  774232 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-407427/.minikube/cert.pem
	I1202 20:57:07.599511  774232 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-407427/.minikube/cert.pem (1123 bytes)
	I1202 20:57:07.599683  774232 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-407427/.minikube/key.pem, removing ...
	I1202 20:57:07.599697  774232 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-407427/.minikube/key.pem
	I1202 20:57:07.599755  774232 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-407427/.minikube/key.pem (1675 bytes)
	I1202 20:57:07.599859  774232 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-407427/.minikube/ca.pem, removing ...
	I1202 20:57:07.599868  774232 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-407427/.minikube/ca.pem
	I1202 20:57:07.599960  774232 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-407427/.minikube/ca.pem (1082 bytes)
	I1202 20:57:07.600081  774232 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-407427/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca-key.pem org=jenkins.embed-certs-386191 san=[127.0.0.1 192.168.103.2 embed-certs-386191 localhost minikube]
	I1202 20:57:07.648058  774232 provision.go:177] copyRemoteCerts
	I1202 20:57:07.648157  774232 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 20:57:07.648228  774232 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-386191
	I1202 20:57:07.667174  774232 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/embed-certs-386191/id_rsa Username:docker}
	I1202 20:57:07.768998  774232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 20:57:07.788122  774232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1202 20:57:07.807387  774232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1202 20:57:07.826355  774232 provision.go:87] duration metric: took 246.694362ms to configureAuth
	I1202 20:57:07.826383  774232 ubuntu.go:206] setting minikube options for container-runtime
	I1202 20:57:07.826543  774232 config.go:182] Loaded profile config "embed-certs-386191": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 20:57:07.826653  774232 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-386191
	I1202 20:57:07.845598  774232 main.go:143] libmachine: Using SSH client type: native
	I1202 20:57:07.845893  774232 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33518 <nil> <nil>}
	I1202 20:57:07.845910  774232 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 20:57:08.184581  774232 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 20:57:08.184613  774232 machine.go:97] duration metric: took 4.104173799s to provisionDockerMachine
	I1202 20:57:08.184630  774232 start.go:293] postStartSetup for "embed-certs-386191" (driver="docker")
	I1202 20:57:08.184645  774232 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 20:57:08.184730  774232 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 20:57:08.184795  774232 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-386191
	I1202 20:57:08.204944  774232 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/embed-certs-386191/id_rsa Username:docker}
	I1202 20:57:08.305914  774232 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 20:57:08.309766  774232 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 20:57:08.309809  774232 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 20:57:08.309823  774232 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-407427/.minikube/addons for local assets ...
	I1202 20:57:08.309877  774232 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-407427/.minikube/files for local assets ...
	I1202 20:57:08.309985  774232 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-407427/.minikube/files/etc/ssl/certs/4110322.pem -> 4110322.pem in /etc/ssl/certs
	I1202 20:57:08.310090  774232 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 20:57:08.318141  774232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/files/etc/ssl/certs/4110322.pem --> /etc/ssl/certs/4110322.pem (1708 bytes)
	I1202 20:57:08.336773  774232 start.go:296] duration metric: took 152.121205ms for postStartSetup
	I1202 20:57:08.336865  774232 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 20:57:08.336915  774232 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-386191
	I1202 20:57:08.356216  774232 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/embed-certs-386191/id_rsa Username:docker}
	I1202 20:57:08.454737  774232 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 20:57:08.459772  774232 fix.go:56] duration metric: took 4.697444869s for fixHost
	I1202 20:57:08.459799  774232 start.go:83] releasing machines lock for "embed-certs-386191", held for 4.697494598s
	I1202 20:57:08.459884  774232 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-386191
	I1202 20:57:08.478583  774232 ssh_runner.go:195] Run: cat /version.json
	I1202 20:57:08.478654  774232 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 20:57:08.478674  774232 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-386191
	I1202 20:57:08.478721  774232 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-386191
	I1202 20:57:08.498926  774232 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/embed-certs-386191/id_rsa Username:docker}
	I1202 20:57:08.499314  774232 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/embed-certs-386191/id_rsa Username:docker}
	I1202 20:57:08.652660  774232 ssh_runner.go:195] Run: systemctl --version
	I1202 20:57:08.659775  774232 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 20:57:08.696845  774232 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 20:57:08.701831  774232 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 20:57:08.701946  774232 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 20:57:08.710279  774232 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1202 20:57:08.710306  774232 start.go:496] detecting cgroup driver to use...
	I1202 20:57:08.710340  774232 detect.go:190] detected "systemd" cgroup driver on host os
	I1202 20:57:08.710421  774232 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 20:57:08.725534  774232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 20:57:08.739105  774232 docker.go:218] disabling cri-docker service (if available) ...
	I1202 20:57:08.739195  774232 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 20:57:08.754305  774232 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 20:57:08.768174  774232 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 20:57:08.849581  774232 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 20:57:08.929908  774232 docker.go:234] disabling docker service ...
	I1202 20:57:08.929985  774232 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 20:57:08.944745  774232 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 20:57:08.958289  774232 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 20:57:09.040440  774232 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 20:57:09.121679  774232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 20:57:09.135343  774232 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 20:57:09.150391  774232 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1202 20:57:09.150454  774232 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:57:09.160229  774232 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1202 20:57:09.160309  774232 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:57:09.169758  774232 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:57:09.179178  774232 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:57:09.188470  774232 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 20:57:09.197290  774232 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:57:09.207458  774232 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:57:09.216903  774232 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:57:09.226360  774232 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 20:57:09.234532  774232 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 20:57:09.242859  774232 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 20:57:09.326255  774232 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1202 20:57:09.470049  774232 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 20:57:09.470136  774232 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 20:57:09.474392  774232 start.go:564] Will wait 60s for crictl version
	I1202 20:57:09.474451  774232 ssh_runner.go:195] Run: which crictl
	I1202 20:57:09.478445  774232 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 20:57:09.504698  774232 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1202 20:57:09.504770  774232 ssh_runner.go:195] Run: crio --version
	I1202 20:57:09.534114  774232 ssh_runner.go:195] Run: crio --version
	I1202 20:57:09.566764  774232 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.2 ...
	I1202 20:57:09.568218  774232 cli_runner.go:164] Run: docker network inspect embed-certs-386191 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 20:57:09.587321  774232 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1202 20:57:09.592089  774232 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 20:57:09.604537  774232 kubeadm.go:884] updating cluster {Name:embed-certs-386191 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-386191 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:dock
er BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 20:57:09.604663  774232 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 20:57:09.604705  774232 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 20:57:09.638386  774232 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 20:57:09.638408  774232 crio.go:433] Images already preloaded, skipping extraction
	I1202 20:57:09.638469  774232 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 20:57:09.664533  774232 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 20:57:09.664556  774232 cache_images.go:86] Images are preloaded, skipping loading
	I1202 20:57:09.664564  774232 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.2 crio true true} ...
	I1202 20:57:09.664668  774232 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-386191 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:embed-certs-386191 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1202 20:57:09.664730  774232 ssh_runner.go:195] Run: crio config
	I1202 20:57:09.712997  774232 cni.go:84] Creating CNI manager for ""
	I1202 20:57:09.713020  774232 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 20:57:09.713040  774232 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1202 20:57:09.713063  774232 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-386191 NodeName:embed-certs-386191 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 20:57:09.713233  774232 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-386191"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1202 20:57:09.713300  774232 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1202 20:57:09.721845  774232 binaries.go:51] Found k8s binaries, skipping transfer
	I1202 20:57:09.721930  774232 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 20:57:09.730141  774232 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (369 bytes)
	I1202 20:57:09.743834  774232 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1202 20:57:09.757119  774232 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2217 bytes)
	I1202 20:57:09.770727  774232 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1202 20:57:09.774934  774232 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 20:57:09.785672  774232 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 20:57:09.865745  774232 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 20:57:09.889834  774232 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191 for IP: 192.168.103.2
	I1202 20:57:09.889866  774232 certs.go:195] generating shared ca certs ...
	I1202 20:57:09.889885  774232 certs.go:227] acquiring lock for ca certs: {Name:mkea11924193dd4dc459e16d70207bde383196df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:57:09.890103  774232 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-407427/.minikube/ca.key
	I1202 20:57:09.890169  774232 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-407427/.minikube/proxy-client-ca.key
	I1202 20:57:09.890182  774232 certs.go:257] generating profile certs ...
	I1202 20:57:09.890312  774232 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/client.key
	I1202 20:57:09.890401  774232 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/apiserver.key.1b423d29
	I1202 20:57:09.890456  774232 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/proxy-client.key
	I1202 20:57:09.890593  774232 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/411032.pem (1338 bytes)
	W1202 20:57:09.890638  774232 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-407427/.minikube/certs/411032_empty.pem, impossibly tiny 0 bytes
	I1202 20:57:09.890652  774232 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca-key.pem (1679 bytes)
	I1202 20:57:09.890692  774232 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/ca.pem (1082 bytes)
	I1202 20:57:09.890723  774232 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/cert.pem (1123 bytes)
	I1202 20:57:09.890768  774232 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/certs/key.pem (1675 bytes)
	I1202 20:57:09.890824  774232 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-407427/.minikube/files/etc/ssl/certs/4110322.pem (1708 bytes)
	I1202 20:57:09.891720  774232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 20:57:09.911874  774232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1202 20:57:09.933174  774232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 20:57:09.954654  774232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1202 20:57:09.980887  774232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1202 20:57:10.000742  774232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1202 20:57:10.020847  774232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 20:57:10.039958  774232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/embed-certs-386191/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1202 20:57:10.059657  774232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 20:57:10.078374  774232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/certs/411032.pem --> /usr/share/ca-certificates/411032.pem (1338 bytes)
	I1202 20:57:10.098039  774232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-407427/.minikube/files/etc/ssl/certs/4110322.pem --> /usr/share/ca-certificates/4110322.pem (1708 bytes)
	I1202 20:57:10.116307  774232 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 20:57:10.130357  774232 ssh_runner.go:195] Run: openssl version
	I1202 20:57:10.137246  774232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 20:57:10.146555  774232 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 20:57:10.150803  774232 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 19:55 /usr/share/ca-certificates/minikubeCA.pem
	I1202 20:57:10.150871  774232 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 20:57:10.186704  774232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 20:57:10.195739  774232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/411032.pem && ln -fs /usr/share/ca-certificates/411032.pem /etc/ssl/certs/411032.pem"
	I1202 20:57:10.206054  774232 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/411032.pem
	I1202 20:57:10.210403  774232 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 20:13 /usr/share/ca-certificates/411032.pem
	I1202 20:57:10.210459  774232 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/411032.pem
	I1202 20:57:10.244732  774232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/411032.pem /etc/ssl/certs/51391683.0"
	I1202 20:57:10.253457  774232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4110322.pem && ln -fs /usr/share/ca-certificates/4110322.pem /etc/ssl/certs/4110322.pem"
	I1202 20:57:10.262832  774232 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4110322.pem
	I1202 20:57:10.267196  774232 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 20:13 /usr/share/ca-certificates/4110322.pem
	I1202 20:57:10.267281  774232 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4110322.pem
	I1202 20:57:10.303119  774232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4110322.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 20:57:10.312123  774232 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 20:57:10.316373  774232 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1202 20:57:10.351651  774232 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1202 20:57:10.388448  774232 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1202 20:57:10.435290  774232 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1202 20:57:10.482746  774232 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1202 20:57:10.536454  774232 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1202 20:57:10.595716  774232 kubeadm.go:401] StartCluster: {Name:embed-certs-386191 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-386191 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 20:57:10.595861  774232 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 20:57:10.595945  774232 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 20:57:10.627116  774232 cri.go:89] found id: "977a5d34d10349633d8b109d327cf440d676aa5501596ec9742db0005680b6ea"
	I1202 20:57:10.627141  774232 cri.go:89] found id: "7bbd6132314dd50edb345c367cfd40b9555ce01487136490278226bf20c9869c"
	I1202 20:57:10.627147  774232 cri.go:89] found id: "bb42ebc0538d2d4002108a87aba40e3d0ac601e9d3e24c09df1bd4436d20d164"
	I1202 20:57:10.627152  774232 cri.go:89] found id: "2d91f220a3e5c81f5d5d8cdae53244fd20a45b32d3e9cef96c94d22f621da68c"
	I1202 20:57:10.627155  774232 cri.go:89] found id: ""
	I1202 20:57:10.627205  774232 ssh_runner.go:195] Run: sudo runc list -f json
	W1202 20:57:10.639688  774232 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T20:57:10Z" level=error msg="open /run/runc: no such file or directory"
	I1202 20:57:10.639772  774232 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 20:57:10.648943  774232 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1202 20:57:10.648966  774232 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1202 20:57:10.649019  774232 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1202 20:57:10.657586  774232 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1202 20:57:10.658006  774232 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-386191" does not appear in /home/jenkins/minikube-integration/21997-407427/kubeconfig
	I1202 20:57:10.658141  774232 kubeconfig.go:62] /home/jenkins/minikube-integration/21997-407427/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-386191" cluster setting kubeconfig missing "embed-certs-386191" context setting]
	I1202 20:57:10.658440  774232 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-407427/kubeconfig: {Name:mk71769b01652992a8aaa239aae4f5c38b3500e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:57:10.659671  774232 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1202 20:57:10.668787  774232 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1202 20:57:10.668831  774232 kubeadm.go:602] duration metric: took 19.85856ms to restartPrimaryControlPlane
	I1202 20:57:10.668844  774232 kubeadm.go:403] duration metric: took 73.144155ms to StartCluster
	I1202 20:57:10.668866  774232 settings.go:142] acquiring lock: {Name:mk79858205e0c110b31d2645b49097e349ff37c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:57:10.668947  774232 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-407427/kubeconfig
	I1202 20:57:10.670108  774232 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-407427/kubeconfig: {Name:mk71769b01652992a8aaa239aae4f5c38b3500e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:57:10.670362  774232 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 20:57:10.670438  774232 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1202 20:57:10.670547  774232 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-386191"
	I1202 20:57:10.670574  774232 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-386191"
	W1202 20:57:10.670586  774232 addons.go:248] addon storage-provisioner should already be in state true
	I1202 20:57:10.670594  774232 addons.go:70] Setting dashboard=true in profile "embed-certs-386191"
	I1202 20:57:10.670613  774232 addons.go:239] Setting addon dashboard=true in "embed-certs-386191"
	W1202 20:57:10.670621  774232 addons.go:248] addon dashboard should already be in state true
	I1202 20:57:10.670621  774232 host.go:66] Checking if "embed-certs-386191" exists ...
	I1202 20:57:10.670619  774232 config.go:182] Loaded profile config "embed-certs-386191": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 20:57:10.670627  774232 addons.go:70] Setting default-storageclass=true in profile "embed-certs-386191"
	I1202 20:57:10.670645  774232 host.go:66] Checking if "embed-certs-386191" exists ...
	I1202 20:57:10.670663  774232 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-386191"
	I1202 20:57:10.671045  774232 cli_runner.go:164] Run: docker container inspect embed-certs-386191 --format={{.State.Status}}
	I1202 20:57:10.671190  774232 cli_runner.go:164] Run: docker container inspect embed-certs-386191 --format={{.State.Status}}
	I1202 20:57:10.671213  774232 cli_runner.go:164] Run: docker container inspect embed-certs-386191 --format={{.State.Status}}
	I1202 20:57:10.672419  774232 out.go:179] * Verifying Kubernetes components...
	I1202 20:57:10.673823  774232 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 20:57:10.699057  774232 addons.go:239] Setting addon default-storageclass=true in "embed-certs-386191"
	W1202 20:57:10.699103  774232 addons.go:248] addon default-storageclass should already be in state true
	I1202 20:57:10.699138  774232 host.go:66] Checking if "embed-certs-386191" exists ...
	I1202 20:57:10.699784  774232 cli_runner.go:164] Run: docker container inspect embed-certs-386191 --format={{.State.Status}}
	I1202 20:57:10.699783  774232 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1202 20:57:10.700586  774232 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 20:57:10.704270  774232 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 20:57:10.704292  774232 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1202 20:57:10.704299  774232 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1202 20:57:10.704357  774232 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-386191
	I1202 20:57:10.706434  774232 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1202 20:57:10.706458  774232 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1202 20:57:10.706512  774232 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-386191
	I1202 20:57:10.729620  774232 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1202 20:57:10.729645  774232 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1202 20:57:10.729717  774232 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-386191
	I1202 20:57:10.740552  774232 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/embed-certs-386191/id_rsa Username:docker}
	I1202 20:57:10.741769  774232 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/embed-certs-386191/id_rsa Username:docker}
	I1202 20:57:10.757420  774232 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/embed-certs-386191/id_rsa Username:docker}
	I1202 20:57:10.835616  774232 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 20:57:10.851324  774232 node_ready.go:35] waiting up to 6m0s for node "embed-certs-386191" to be "Ready" ...
	I1202 20:57:10.858797  774232 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1202 20:57:10.858823  774232 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1202 20:57:10.859643  774232 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 20:57:10.870825  774232 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1202 20:57:10.875378  774232 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1202 20:57:10.875407  774232 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1202 20:57:10.892562  774232 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1202 20:57:10.892708  774232 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1202 20:57:10.909628  774232 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1202 20:57:10.909654  774232 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1202 20:57:10.928318  774232 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1202 20:57:10.928351  774232 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1202 20:57:10.943558  774232 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1202 20:57:10.943584  774232 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1202 20:57:10.958869  774232 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1202 20:57:10.958898  774232 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1202 20:57:10.974913  774232 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1202 20:57:10.974944  774232 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1202 20:57:10.987816  774232 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1202 20:57:10.987849  774232 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1202 20:57:11.004038  774232 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1202 20:57:12.172221  774232 node_ready.go:49] node "embed-certs-386191" is "Ready"
	I1202 20:57:12.172281  774232 node_ready.go:38] duration metric: took 1.320900145s for node "embed-certs-386191" to be "Ready" ...
	I1202 20:57:12.172300  774232 api_server.go:52] waiting for apiserver process to appear ...
	I1202 20:57:12.172360  774232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:57:12.704939  774232 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.84526039s)
	I1202 20:57:12.705016  774232 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.834127091s)
	I1202 20:57:12.705189  774232 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.701112937s)
	I1202 20:57:12.705251  774232 api_server.go:72] duration metric: took 2.034851649s to wait for apiserver process to appear ...
	I1202 20:57:12.705408  774232 api_server.go:88] waiting for apiserver healthz status ...
	I1202 20:57:12.705427  774232 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1202 20:57:12.707214  774232 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-386191 addons enable metrics-server
	
	I1202 20:57:12.712812  774232 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 20:57:12.712839  774232 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 20:57:12.718943  774232 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1202 20:57:12.720446  774232 addons.go:530] duration metric: took 2.050020615s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1202 20:57:13.205727  774232 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1202 20:57:13.211193  774232 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 20:57:13.211231  774232 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 20:57:13.706040  774232 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1202 20:57:13.711063  774232 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1202 20:57:13.712524  774232 api_server.go:141] control plane version: v1.34.2
	I1202 20:57:13.712558  774232 api_server.go:131] duration metric: took 1.007141254s to wait for apiserver health ...
	I1202 20:57:13.712570  774232 system_pods.go:43] waiting for kube-system pods to appear ...
	I1202 20:57:13.716513  774232 system_pods.go:59] 8 kube-system pods found
	I1202 20:57:13.716559  774232 system_pods.go:61] "coredns-66bc5c9577-q6l9x" [e7159eb1-3cde-437a-99e3-760c9c397977] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 20:57:13.716572  774232 system_pods.go:61] "etcd-embed-certs-386191" [5ca26226-7c69-4d6b-a513-2187c089d96c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1202 20:57:13.716584  774232 system_pods.go:61] "kindnet-x9jsh" [410369de-877d-46e5-8f7c-cd8076d1d2f5] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1202 20:57:13.716604  774232 system_pods.go:61] "kube-apiserver-embed-certs-386191" [dd2a96bf-6afe-43b8-b01d-a88496c7c9b9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1202 20:57:13.716611  774232 system_pods.go:61] "kube-controller-manager-embed-certs-386191" [ba51ea44-285e-494e-a173-75f314cb6a5e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1202 20:57:13.716619  774232 system_pods.go:61] "kube-proxy-854r8" [6c9652b0-217c-466f-9345-7364f0e39936] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1202 20:57:13.716627  774232 system_pods.go:61] "kube-scheduler-embed-certs-386191" [abe313cb-061b-4d65-b0e0-f369aacdc1c4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1202 20:57:13.716632  774232 system_pods.go:61] "storage-provisioner" [d37e55bb-bb1f-4659-a9c5-14d47011bd23] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 20:57:13.716641  774232 system_pods.go:74] duration metric: took 4.063952ms to wait for pod list to return data ...
	I1202 20:57:13.716653  774232 default_sa.go:34] waiting for default service account to be created ...
	I1202 20:57:13.719337  774232 default_sa.go:45] found service account: "default"
	I1202 20:57:13.719363  774232 default_sa.go:55] duration metric: took 2.699939ms for default service account to be created ...
	I1202 20:57:13.719375  774232 system_pods.go:116] waiting for k8s-apps to be running ...
	I1202 20:57:13.722423  774232 system_pods.go:86] 8 kube-system pods found
	I1202 20:57:13.722455  774232 system_pods.go:89] "coredns-66bc5c9577-q6l9x" [e7159eb1-3cde-437a-99e3-760c9c397977] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 20:57:13.722463  774232 system_pods.go:89] "etcd-embed-certs-386191" [5ca26226-7c69-4d6b-a513-2187c089d96c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1202 20:57:13.722471  774232 system_pods.go:89] "kindnet-x9jsh" [410369de-877d-46e5-8f7c-cd8076d1d2f5] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1202 20:57:13.722485  774232 system_pods.go:89] "kube-apiserver-embed-certs-386191" [dd2a96bf-6afe-43b8-b01d-a88496c7c9b9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1202 20:57:13.722497  774232 system_pods.go:89] "kube-controller-manager-embed-certs-386191" [ba51ea44-285e-494e-a173-75f314cb6a5e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1202 20:57:13.722503  774232 system_pods.go:89] "kube-proxy-854r8" [6c9652b0-217c-466f-9345-7364f0e39936] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1202 20:57:13.722510  774232 system_pods.go:89] "kube-scheduler-embed-certs-386191" [abe313cb-061b-4d65-b0e0-f369aacdc1c4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1202 20:57:13.722515  774232 system_pods.go:89] "storage-provisioner" [d37e55bb-bb1f-4659-a9c5-14d47011bd23] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 20:57:13.722527  774232 system_pods.go:126] duration metric: took 3.142689ms to wait for k8s-apps to be running ...
	I1202 20:57:13.722536  774232 system_svc.go:44] waiting for kubelet service to be running ....
	I1202 20:57:13.722580  774232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 20:57:13.736470  774232 system_svc.go:56] duration metric: took 13.924054ms WaitForService to wait for kubelet
	I1202 20:57:13.736499  774232 kubeadm.go:587] duration metric: took 3.066103339s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 20:57:13.736524  774232 node_conditions.go:102] verifying NodePressure condition ...
	I1202 20:57:13.739874  774232 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1202 20:57:13.739902  774232 node_conditions.go:123] node cpu capacity is 8
	I1202 20:57:13.739915  774232 node_conditions.go:105] duration metric: took 3.385954ms to run NodePressure ...
	I1202 20:57:13.739928  774232 start.go:242] waiting for startup goroutines ...
	I1202 20:57:13.739939  774232 start.go:247] waiting for cluster config update ...
	I1202 20:57:13.739952  774232 start.go:256] writing updated cluster config ...
	I1202 20:57:13.740326  774232 ssh_runner.go:195] Run: rm -f paused
	I1202 20:57:13.744613  774232 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1202 20:57:13.748215  774232 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-q6l9x" in "kube-system" namespace to be "Ready" or be gone ...
	W1202 20:57:15.754394  774232 pod_ready.go:104] pod "coredns-66bc5c9577-q6l9x" is not "Ready", error: <nil>
	W1202 20:57:17.756805  774232 pod_ready.go:104] pod "coredns-66bc5c9577-q6l9x" is not "Ready", error: <nil>
	W1202 20:57:20.254748  774232 pod_ready.go:104] pod "coredns-66bc5c9577-q6l9x" is not "Ready", error: <nil>
	W1202 20:57:22.754847  774232 pod_ready.go:104] pod "coredns-66bc5c9577-q6l9x" is not "Ready", error: <nil>
	W1202 20:57:24.754901  774232 pod_ready.go:104] pod "coredns-66bc5c9577-q6l9x" is not "Ready", error: <nil>
	W1202 20:57:27.254021  774232 pod_ready.go:104] pod "coredns-66bc5c9577-q6l9x" is not "Ready", error: <nil>
	W1202 20:57:29.254601  774232 pod_ready.go:104] pod "coredns-66bc5c9577-q6l9x" is not "Ready", error: <nil>
	W1202 20:57:31.754661  774232 pod_ready.go:104] pod "coredns-66bc5c9577-q6l9x" is not "Ready", error: <nil>
	W1202 20:57:34.255139  774232 pod_ready.go:104] pod "coredns-66bc5c9577-q6l9x" is not "Ready", error: <nil>
	W1202 20:57:36.753491  774232 pod_ready.go:104] pod "coredns-66bc5c9577-q6l9x" is not "Ready", error: <nil>
	W1202 20:57:38.754840  774232 pod_ready.go:104] pod "coredns-66bc5c9577-q6l9x" is not "Ready", error: <nil>
	W1202 20:57:40.756205  774232 pod_ready.go:104] pod "coredns-66bc5c9577-q6l9x" is not "Ready", error: <nil>
	W1202 20:57:43.253901  774232 pod_ready.go:104] pod "coredns-66bc5c9577-q6l9x" is not "Ready", error: <nil>
	W1202 20:57:45.254849  774232 pod_ready.go:104] pod "coredns-66bc5c9577-q6l9x" is not "Ready", error: <nil>
	W1202 20:57:47.754159  774232 pod_ready.go:104] pod "coredns-66bc5c9577-q6l9x" is not "Ready", error: <nil>
	I1202 20:57:49.254583  774232 pod_ready.go:94] pod "coredns-66bc5c9577-q6l9x" is "Ready"
	I1202 20:57:49.254616  774232 pod_ready.go:86] duration metric: took 35.506377539s for pod "coredns-66bc5c9577-q6l9x" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:57:49.257369  774232 pod_ready.go:83] waiting for pod "etcd-embed-certs-386191" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:57:49.261904  774232 pod_ready.go:94] pod "etcd-embed-certs-386191" is "Ready"
	I1202 20:57:49.261934  774232 pod_ready.go:86] duration metric: took 4.541022ms for pod "etcd-embed-certs-386191" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:57:49.264267  774232 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-386191" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:57:49.268801  774232 pod_ready.go:94] pod "kube-apiserver-embed-certs-386191" is "Ready"
	I1202 20:57:49.268959  774232 pod_ready.go:86] duration metric: took 4.661362ms for pod "kube-apiserver-embed-certs-386191" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:57:49.271959  774232 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-386191" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:57:49.453416  774232 pod_ready.go:94] pod "kube-controller-manager-embed-certs-386191" is "Ready"
	I1202 20:57:49.453449  774232 pod_ready.go:86] duration metric: took 181.463804ms for pod "kube-controller-manager-embed-certs-386191" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:57:49.652760  774232 pod_ready.go:83] waiting for pod "kube-proxy-854r8" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:57:50.053271  774232 pod_ready.go:94] pod "kube-proxy-854r8" is "Ready"
	I1202 20:57:50.053306  774232 pod_ready.go:86] duration metric: took 400.519526ms for pod "kube-proxy-854r8" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:57:50.252936  774232 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-386191" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:57:50.652636  774232 pod_ready.go:94] pod "kube-scheduler-embed-certs-386191" is "Ready"
	I1202 20:57:50.652666  774232 pod_ready.go:86] duration metric: took 399.70043ms for pod "kube-scheduler-embed-certs-386191" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:57:50.652680  774232 pod_ready.go:40] duration metric: took 36.908030477s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1202 20:57:50.696685  774232 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1202 20:57:50.698749  774232 out.go:179] * Done! kubectl is now configured to use "embed-certs-386191" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 02 20:57:23 embed-certs-386191 crio[575]: time="2025-12-02T20:57:23.825651656Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 02 20:57:23 embed-certs-386191 crio[575]: time="2025-12-02T20:57:23.829389193Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 02 20:57:23 embed-certs-386191 crio[575]: time="2025-12-02T20:57:23.829417807Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 02 20:57:39 embed-certs-386191 crio[575]: time="2025-12-02T20:57:39.984375206Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=d301a522-aac8-495a-a658-cac992397cbd name=/runtime.v1.ImageService/ImageStatus
	Dec 02 20:57:39 embed-certs-386191 crio[575]: time="2025-12-02T20:57:39.985417968Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=e73569d3-0951-42e8-ae36-3f65bc8c98f7 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 20:57:39 embed-certs-386191 crio[575]: time="2025-12-02T20:57:39.986430445Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wfkqk/dashboard-metrics-scraper" id=f4db0d92-9ebc-4b60-a362-09f7381d017d name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 20:57:39 embed-certs-386191 crio[575]: time="2025-12-02T20:57:39.986569952Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 20:57:39 embed-certs-386191 crio[575]: time="2025-12-02T20:57:39.991677319Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 20:57:39 embed-certs-386191 crio[575]: time="2025-12-02T20:57:39.992284782Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 20:57:40 embed-certs-386191 crio[575]: time="2025-12-02T20:57:40.025597955Z" level=info msg="Created container 8a430412e5cdd121d367b7b4d53b1fa49127fabd0127bc78bee44ec9f14c657b: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wfkqk/dashboard-metrics-scraper" id=f4db0d92-9ebc-4b60-a362-09f7381d017d name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 20:57:40 embed-certs-386191 crio[575]: time="2025-12-02T20:57:40.02651668Z" level=info msg="Starting container: 8a430412e5cdd121d367b7b4d53b1fa49127fabd0127bc78bee44ec9f14c657b" id=bf411397-cc2c-4e98-861e-cfee396a835b name=/runtime.v1.RuntimeService/StartContainer
	Dec 02 20:57:40 embed-certs-386191 crio[575]: time="2025-12-02T20:57:40.028521177Z" level=info msg="Started container" PID=1760 containerID=8a430412e5cdd121d367b7b4d53b1fa49127fabd0127bc78bee44ec9f14c657b description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wfkqk/dashboard-metrics-scraper id=bf411397-cc2c-4e98-861e-cfee396a835b name=/runtime.v1.RuntimeService/StartContainer sandboxID=01e6db61986fab83443cc55fad85a1f9f1bfdbe21c74b1eec97433a68fd702f2
	Dec 02 20:57:40 embed-certs-386191 crio[575]: time="2025-12-02T20:57:40.096458732Z" level=info msg="Removing container: eb2020f7201b4a1980049db1cf35098ef46e7e67661b129617368e9376bf461c" id=d30af833-3f90-4def-bf3e-d91c145723bc name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 02 20:57:40 embed-certs-386191 crio[575]: time="2025-12-02T20:57:40.107476084Z" level=info msg="Removed container eb2020f7201b4a1980049db1cf35098ef46e7e67661b129617368e9376bf461c: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wfkqk/dashboard-metrics-scraper" id=d30af833-3f90-4def-bf3e-d91c145723bc name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 02 20:57:44 embed-certs-386191 crio[575]: time="2025-12-02T20:57:44.108358Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=83d07edb-5a83-4fd1-a115-02b7bc152467 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 20:57:44 embed-certs-386191 crio[575]: time="2025-12-02T20:57:44.109457879Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=a1144362-ea78-444d-9b52-1133120da854 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 20:57:44 embed-certs-386191 crio[575]: time="2025-12-02T20:57:44.11074607Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=76b8de18-d6d7-46a2-a855-276c4ea7403f name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 20:57:44 embed-certs-386191 crio[575]: time="2025-12-02T20:57:44.110894547Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 20:57:44 embed-certs-386191 crio[575]: time="2025-12-02T20:57:44.115357242Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 20:57:44 embed-certs-386191 crio[575]: time="2025-12-02T20:57:44.115516239Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/ba1318178a3df6e0c4a5c4b264cae8c03e300763c1e21e97ecc298030e5fb2a2/merged/etc/passwd: no such file or directory"
	Dec 02 20:57:44 embed-certs-386191 crio[575]: time="2025-12-02T20:57:44.115534099Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/ba1318178a3df6e0c4a5c4b264cae8c03e300763c1e21e97ecc298030e5fb2a2/merged/etc/group: no such file or directory"
	Dec 02 20:57:44 embed-certs-386191 crio[575]: time="2025-12-02T20:57:44.115802479Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 20:57:44 embed-certs-386191 crio[575]: time="2025-12-02T20:57:44.144736192Z" level=info msg="Created container fbbb7718121bf867e159d5cd1a6bf1edd51a7b976076819722430cf4282dc5dd: kube-system/storage-provisioner/storage-provisioner" id=76b8de18-d6d7-46a2-a855-276c4ea7403f name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 20:57:44 embed-certs-386191 crio[575]: time="2025-12-02T20:57:44.145560245Z" level=info msg="Starting container: fbbb7718121bf867e159d5cd1a6bf1edd51a7b976076819722430cf4282dc5dd" id=3be3a5f9-1b4b-4a0e-ab53-0d73bd70359b name=/runtime.v1.RuntimeService/StartContainer
	Dec 02 20:57:44 embed-certs-386191 crio[575]: time="2025-12-02T20:57:44.147456367Z" level=info msg="Started container" PID=1774 containerID=fbbb7718121bf867e159d5cd1a6bf1edd51a7b976076819722430cf4282dc5dd description=kube-system/storage-provisioner/storage-provisioner id=3be3a5f9-1b4b-4a0e-ab53-0d73bd70359b name=/runtime.v1.RuntimeService/StartContainer sandboxID=d1bc9f770cc98ee34b17735bf561cf361e0d0b1495d891d1ab151351c9dbf394
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	fbbb7718121bf       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           23 seconds ago      Running             storage-provisioner         1                   d1bc9f770cc98       storage-provisioner                          kube-system
	8a430412e5cdd       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           27 seconds ago      Exited              dashboard-metrics-scraper   2                   01e6db61986fa       dashboard-metrics-scraper-6ffb444bf9-wfkqk   kubernetes-dashboard
	3ca826e1199be       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   45 seconds ago      Running             kubernetes-dashboard        0                   a407dc1ce9ba8       kubernetes-dashboard-855c9754f9-zkxsp        kubernetes-dashboard
	fe0d3771d27b3       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           54 seconds ago      Running             coredns                     0                   4ef439d163c3e       coredns-66bc5c9577-q6l9x                     kube-system
	da9e328e7a69d       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           54 seconds ago      Running             busybox                     1                   87413d3618010       busybox                                      default
	8c1d49372b9e2       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           54 seconds ago      Running             kindnet-cni                 0                   50479fcfafba4       kindnet-x9jsh                                kube-system
	3e2d26d4dcdd3       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                           54 seconds ago      Running             kube-proxy                  0                   a7ec21b6ece66       kube-proxy-854r8                             kube-system
	b5ab2cecc2685       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           54 seconds ago      Exited              storage-provisioner         0                   d1bc9f770cc98       storage-provisioner                          kube-system
	977a5d34d1034       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                           57 seconds ago      Running             kube-apiserver              0                   f6dd2712e995d       kube-apiserver-embed-certs-386191            kube-system
	7bbd6132314dd       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                           57 seconds ago      Running             kube-controller-manager     0                   b45cc06e5246c       kube-controller-manager-embed-certs-386191   kube-system
	bb42ebc0538d2       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           57 seconds ago      Running             etcd                        0                   050fe7b3adcf1       etcd-embed-certs-386191                      kube-system
	2d91f220a3e5c       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                           57 seconds ago      Running             kube-scheduler              0                   a1d3646f4ea8d       kube-scheduler-embed-certs-386191            kube-system
	
	
	==> coredns [fe0d3771d27b3c21233deb323722886a95d260e7637dc80599a483422f200d04] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:37930 - 2031 "HINFO IN 1010263415870260272.6391674541796422016. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.026700371s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-386191
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-386191
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c0dec3f81fa1d67ed4ee425e0873d0aa009f9b92
	                    minikube.k8s.io/name=embed-certs-386191
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_02T20_56_13_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 02 Dec 2025 20:56:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-386191
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 02 Dec 2025 20:57:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 02 Dec 2025 20:57:42 +0000   Tue, 02 Dec 2025 20:56:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 02 Dec 2025 20:57:42 +0000   Tue, 02 Dec 2025 20:56:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 02 Dec 2025 20:57:42 +0000   Tue, 02 Dec 2025 20:56:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 02 Dec 2025 20:57:42 +0000   Tue, 02 Dec 2025 20:56:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    embed-certs-386191
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 c31a325af81b969158c21fa769271857
	  System UUID:                f83f142d-7c61-4329-95b4-56ae3cea973b
	  Boot ID:                    9dd5456f-e394-4b4b-9458-48d26faf507e
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 coredns-66bc5c9577-q6l9x                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     109s
	  kube-system                 etcd-embed-certs-386191                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         116s
	  kube-system                 kindnet-x9jsh                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      111s
	  kube-system                 kube-apiserver-embed-certs-386191             250m (3%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-controller-manager-embed-certs-386191    200m (2%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-proxy-854r8                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-scheduler-embed-certs-386191             100m (1%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-wfkqk    0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-zkxsp         0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 108s               kube-proxy       
	  Normal  Starting                 54s                kube-proxy       
	  Normal  NodeHasSufficientMemory  115s               kubelet          Node embed-certs-386191 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    115s               kubelet          Node embed-certs-386191 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     115s               kubelet          Node embed-certs-386191 status is now: NodeHasSufficientPID
	  Normal  Starting                 115s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           111s               node-controller  Node embed-certs-386191 event: Registered Node embed-certs-386191 in Controller
	  Normal  NodeReady                97s                kubelet          Node embed-certs-386191 status is now: NodeReady
	  Normal  Starting                 58s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  57s (x8 over 58s)  kubelet          Node embed-certs-386191 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    57s (x8 over 58s)  kubelet          Node embed-certs-386191 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     57s (x8 over 58s)  kubelet          Node embed-certs-386191 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           52s                node-controller  Node embed-certs-386191 event: Registered Node embed-certs-386191 in Controller
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: a6 a8 6c 2c e4 fb 66 95 0d 9f b9 e6 08 00
	[ +32.254238] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a6 a8 6c 2c e4 fb 66 95 0d 9f b9 e6 08 00
	[Dec 2 20:52] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 62 03 bd 14 45 8a 08 06
	[  +0.000590] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 96 27 ad 0d 40 04 08 06
	[Dec 2 20:53] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 56 d8 4f 6f 7c f7 08 06
	[  +0.000700] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 82 e4 ba c0 78 5f 08 06
	[ +10.119645] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000022] ll header: 00000000: ff ff ff ff ff ff ea 15 d8 88 c9 57 08 06
	[  +2.447166] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 62 df 09 53 d6 6e 08 06
	[  +0.000374] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 62 8d 06 71 0a 5e 08 06
	[Dec 2 20:54] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 12 47 13 50 f6 bc 08 06
	[  +0.001523] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ea 15 d8 88 c9 57 08 06
	[ +22.123549] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 36 0d 45 06 42 2a 08 06
	[  +0.000389] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 56 d8 4f 6f 7c f7 08 06
	
	
	==> etcd [bb42ebc0538d2d4002108a87aba40e3d0ac601e9d3e24c09df1bd4436d20d164] <==
	{"level":"warn","ts":"2025-12-02T20:57:11.535154Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:57:11.542092Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59302","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:57:11.552304Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59312","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:57:11.559446Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:57:11.567131Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:57:11.574371Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:57:11.581187Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:57:11.588176Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:57:11.594864Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59454","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:57:11.603162Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59460","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:57:11.610280Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:57:11.617730Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:57:11.625164Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59526","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:57:11.633656Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:57:11.648313Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:57:11.656728Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:57:11.665222Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:57:11.673351Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:57:11.681101Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:57:11.688306Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59682","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:57:11.706371Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59708","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:57:11.710238Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59714","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:57:11.717260Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59746","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:57:11.724526Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:57:11.773044Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59810","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 20:58:07 up  2:40,  0 user,  load average: 1.58, 3.27, 2.56
	Linux embed-certs-386191 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [8c1d49372b9e24a9b37e0b6123939de40a38bc39fb2d3b737f65fd8154b00adb] <==
	I1202 20:57:13.511354       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1202 20:57:13.511585       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1202 20:57:13.511752       1 main.go:148] setting mtu 1500 for CNI 
	I1202 20:57:13.511768       1 main.go:178] kindnetd IP family: "ipv4"
	I1202 20:57:13.511788       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-02T20:57:13Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1202 20:57:13.811845       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1202 20:57:13.812319       1 controller.go:381] "Waiting for informer caches to sync"
	I1202 20:57:13.812344       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1202 20:57:13.812507       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1202 20:57:14.307157       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1202 20:57:14.307458       1 metrics.go:72] Registering metrics
	I1202 20:57:14.307537       1 controller.go:711] "Syncing nftables rules"
	I1202 20:57:23.812734       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1202 20:57:23.812813       1 main.go:301] handling current node
	I1202 20:57:33.815418       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1202 20:57:33.815465       1 main.go:301] handling current node
	I1202 20:57:43.812289       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1202 20:57:43.812341       1 main.go:301] handling current node
	I1202 20:57:53.815844       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1202 20:57:53.815881       1 main.go:301] handling current node
	I1202 20:58:03.820149       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1202 20:58:03.820184       1 main.go:301] handling current node
	
	
	==> kube-apiserver [977a5d34d10349633d8b109d327cf440d676aa5501596ec9742db0005680b6ea] <==
	I1202 20:57:12.242623       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1202 20:57:12.242631       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1202 20:57:12.242693       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1202 20:57:12.242742       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1202 20:57:12.242779       1 aggregator.go:171] initial CRD sync complete...
	I1202 20:57:12.242788       1 autoregister_controller.go:144] Starting autoregister controller
	I1202 20:57:12.242796       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1202 20:57:12.242804       1 cache.go:39] Caches are synced for autoregister controller
	I1202 20:57:12.243117       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1202 20:57:12.243164       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1202 20:57:12.247253       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1202 20:57:12.249279       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1202 20:57:12.249752       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1202 20:57:12.296113       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1202 20:57:12.512701       1 controller.go:667] quota admission added evaluator for: namespaces
	I1202 20:57:12.543881       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1202 20:57:12.567675       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1202 20:57:12.576840       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1202 20:57:12.584419       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1202 20:57:12.625935       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.223.192"}
	I1202 20:57:12.637516       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.28.127"}
	I1202 20:57:13.145541       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1202 20:57:16.020410       1 controller.go:667] quota admission added evaluator for: endpoints
	I1202 20:57:16.071645       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1202 20:57:16.120421       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [7bbd6132314dd50edb345c367cfd40b9555ce01487136490278226bf20c9869c] <==
	I1202 20:57:15.580404       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1202 20:57:15.582700       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1202 20:57:15.584989       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1202 20:57:15.587412       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1202 20:57:15.589938       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1202 20:57:15.617141       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1202 20:57:15.617178       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1202 20:57:15.617185       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1202 20:57:15.617244       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1202 20:57:15.617231       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1202 20:57:15.617286       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1202 20:57:15.617431       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1202 20:57:15.617474       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1202 20:57:15.617674       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1202 20:57:15.617730       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1202 20:57:15.620099       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1202 20:57:15.620116       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1202 20:57:15.620126       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1202 20:57:15.622613       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1202 20:57:15.622631       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1202 20:57:15.622878       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1202 20:57:15.629583       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1202 20:57:15.631847       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1202 20:57:15.634154       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1202 20:57:15.638489       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [3e2d26d4dcdd30ce3fe9e663bdd5abd2a899569ad144e98d6aec4179569df0cf] <==
	I1202 20:57:13.387217       1 server_linux.go:53] "Using iptables proxy"
	I1202 20:57:13.469866       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1202 20:57:13.570718       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1202 20:57:13.570779       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1202 20:57:13.570870       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1202 20:57:13.590338       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1202 20:57:13.590408       1 server_linux.go:132] "Using iptables Proxier"
	I1202 20:57:13.596641       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1202 20:57:13.597187       1 server.go:527] "Version info" version="v1.34.2"
	I1202 20:57:13.597228       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 20:57:13.598400       1 config.go:403] "Starting serviceCIDR config controller"
	I1202 20:57:13.598424       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1202 20:57:13.598430       1 config.go:200] "Starting service config controller"
	I1202 20:57:13.598449       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1202 20:57:13.598464       1 config.go:106] "Starting endpoint slice config controller"
	I1202 20:57:13.598469       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1202 20:57:13.598484       1 config.go:309] "Starting node config controller"
	I1202 20:57:13.598499       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1202 20:57:13.598507       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1202 20:57:13.699534       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1202 20:57:13.699597       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1202 20:57:13.699662       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [2d91f220a3e5c81f5d5d8cdae53244fd20a45b32d3e9cef96c94d22f621da68c] <==
	I1202 20:57:11.105107       1 serving.go:386] Generated self-signed cert in-memory
	I1202 20:57:12.223460       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1202 20:57:12.223485       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 20:57:12.228871       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1202 20:57:12.228905       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1202 20:57:12.228903       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1202 20:57:12.228929       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1202 20:57:12.228938       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1202 20:57:12.228966       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1202 20:57:12.229388       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1202 20:57:12.229441       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1202 20:57:12.329358       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1202 20:57:12.329421       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1202 20:57:12.329358       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 02 20:57:16 embed-certs-386191 kubelet[740]: I1202 20:57:16.307003     740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knklp\" (UniqueName: \"kubernetes.io/projected/ed5d6b00-eaf8-41d7-90ee-e4c7a6a3f869-kube-api-access-knklp\") pod \"kubernetes-dashboard-855c9754f9-zkxsp\" (UID: \"ed5d6b00-eaf8-41d7-90ee-e4c7a6a3f869\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-zkxsp"
	Dec 02 20:57:16 embed-certs-386191 kubelet[740]: I1202 20:57:16.307249     740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmjhb\" (UniqueName: \"kubernetes.io/projected/5c418dfb-12a5-4496-b536-887e6972d44b-kube-api-access-qmjhb\") pod \"dashboard-metrics-scraper-6ffb444bf9-wfkqk\" (UID: \"5c418dfb-12a5-4496-b536-887e6972d44b\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wfkqk"
	Dec 02 20:57:18 embed-certs-386191 kubelet[740]: I1202 20:57:18.884982     740 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Dec 02 20:57:19 embed-certs-386191 kubelet[740]: I1202 20:57:19.030498     740 scope.go:117] "RemoveContainer" containerID="b1f45c83e32e7813a30116c3ea39f0351e6dcd89f5d0b9e454af3de6a739e648"
	Dec 02 20:57:20 embed-certs-386191 kubelet[740]: I1202 20:57:20.035949     740 scope.go:117] "RemoveContainer" containerID="b1f45c83e32e7813a30116c3ea39f0351e6dcd89f5d0b9e454af3de6a739e648"
	Dec 02 20:57:20 embed-certs-386191 kubelet[740]: I1202 20:57:20.036174     740 scope.go:117] "RemoveContainer" containerID="eb2020f7201b4a1980049db1cf35098ef46e7e67661b129617368e9376bf461c"
	Dec 02 20:57:20 embed-certs-386191 kubelet[740]: E1202 20:57:20.036385     740 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wfkqk_kubernetes-dashboard(5c418dfb-12a5-4496-b536-887e6972d44b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wfkqk" podUID="5c418dfb-12a5-4496-b536-887e6972d44b"
	Dec 02 20:57:21 embed-certs-386191 kubelet[740]: I1202 20:57:21.043141     740 scope.go:117] "RemoveContainer" containerID="eb2020f7201b4a1980049db1cf35098ef46e7e67661b129617368e9376bf461c"
	Dec 02 20:57:21 embed-certs-386191 kubelet[740]: E1202 20:57:21.043357     740 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wfkqk_kubernetes-dashboard(5c418dfb-12a5-4496-b536-887e6972d44b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wfkqk" podUID="5c418dfb-12a5-4496-b536-887e6972d44b"
	Dec 02 20:57:23 embed-certs-386191 kubelet[740]: I1202 20:57:23.060282     740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-zkxsp" podStartSLOduration=1.15439516 podStartE2EDuration="7.060258691s" podCreationTimestamp="2025-12-02 20:57:16 +0000 UTC" firstStartedPulling="2025-12-02 20:57:16.569291906 +0000 UTC m=+6.676862837" lastFinishedPulling="2025-12-02 20:57:22.475155443 +0000 UTC m=+12.582726368" observedRunningTime="2025-12-02 20:57:23.060091652 +0000 UTC m=+13.167662595" watchObservedRunningTime="2025-12-02 20:57:23.060258691 +0000 UTC m=+13.167829636"
	Dec 02 20:57:25 embed-certs-386191 kubelet[740]: I1202 20:57:25.495448     740 scope.go:117] "RemoveContainer" containerID="eb2020f7201b4a1980049db1cf35098ef46e7e67661b129617368e9376bf461c"
	Dec 02 20:57:25 embed-certs-386191 kubelet[740]: E1202 20:57:25.495644     740 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wfkqk_kubernetes-dashboard(5c418dfb-12a5-4496-b536-887e6972d44b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wfkqk" podUID="5c418dfb-12a5-4496-b536-887e6972d44b"
	Dec 02 20:57:39 embed-certs-386191 kubelet[740]: I1202 20:57:39.983888     740 scope.go:117] "RemoveContainer" containerID="eb2020f7201b4a1980049db1cf35098ef46e7e67661b129617368e9376bf461c"
	Dec 02 20:57:40 embed-certs-386191 kubelet[740]: I1202 20:57:40.094965     740 scope.go:117] "RemoveContainer" containerID="eb2020f7201b4a1980049db1cf35098ef46e7e67661b129617368e9376bf461c"
	Dec 02 20:57:40 embed-certs-386191 kubelet[740]: I1202 20:57:40.095160     740 scope.go:117] "RemoveContainer" containerID="8a430412e5cdd121d367b7b4d53b1fa49127fabd0127bc78bee44ec9f14c657b"
	Dec 02 20:57:40 embed-certs-386191 kubelet[740]: E1202 20:57:40.095391     740 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wfkqk_kubernetes-dashboard(5c418dfb-12a5-4496-b536-887e6972d44b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wfkqk" podUID="5c418dfb-12a5-4496-b536-887e6972d44b"
	Dec 02 20:57:44 embed-certs-386191 kubelet[740]: I1202 20:57:44.107943     740 scope.go:117] "RemoveContainer" containerID="b5ab2cecc26850a5fcfdda9460fabe3dfee322129d5a7ffa87daa1e4390a54cb"
	Dec 02 20:57:45 embed-certs-386191 kubelet[740]: I1202 20:57:45.495927     740 scope.go:117] "RemoveContainer" containerID="8a430412e5cdd121d367b7b4d53b1fa49127fabd0127bc78bee44ec9f14c657b"
	Dec 02 20:57:45 embed-certs-386191 kubelet[740]: E1202 20:57:45.496234     740 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wfkqk_kubernetes-dashboard(5c418dfb-12a5-4496-b536-887e6972d44b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wfkqk" podUID="5c418dfb-12a5-4496-b536-887e6972d44b"
	Dec 02 20:57:56 embed-certs-386191 kubelet[740]: I1202 20:57:56.983619     740 scope.go:117] "RemoveContainer" containerID="8a430412e5cdd121d367b7b4d53b1fa49127fabd0127bc78bee44ec9f14c657b"
	Dec 02 20:57:56 embed-certs-386191 kubelet[740]: E1202 20:57:56.983831     740 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wfkqk_kubernetes-dashboard(5c418dfb-12a5-4496-b536-887e6972d44b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wfkqk" podUID="5c418dfb-12a5-4496-b536-887e6972d44b"
	Dec 02 20:58:02 embed-certs-386191 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 02 20:58:02 embed-certs-386191 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 02 20:58:02 embed-certs-386191 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 20:58:02 embed-certs-386191 systemd[1]: kubelet.service: Consumed 1.837s CPU time.
	
	
	==> kubernetes-dashboard [3ca826e1199be159f228fc829ee2aa57f744353729960f312b4007dab7811bd8] <==
	2025/12/02 20:57:22 Starting overwatch
	2025/12/02 20:57:22 Using namespace: kubernetes-dashboard
	2025/12/02 20:57:22 Using in-cluster config to connect to apiserver
	2025/12/02 20:57:22 Using secret token for csrf signing
	2025/12/02 20:57:22 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/02 20:57:22 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/02 20:57:22 Successful initial request to the apiserver, version: v1.34.2
	2025/12/02 20:57:22 Generating JWE encryption key
	2025/12/02 20:57:22 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/02 20:57:22 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/02 20:57:22 Initializing JWE encryption key from synchronized object
	2025/12/02 20:57:22 Creating in-cluster Sidecar client
	2025/12/02 20:57:22 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/02 20:57:22 Serving insecurely on HTTP port: 9090
	2025/12/02 20:57:52 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [b5ab2cecc26850a5fcfdda9460fabe3dfee322129d5a7ffa87daa1e4390a54cb] <==
	I1202 20:57:13.347131       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1202 20:57:43.349798       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [fbbb7718121bf867e159d5cd1a6bf1edd51a7b976076819722430cf4282dc5dd] <==
	I1202 20:57:44.160028       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1202 20:57:44.167914       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1202 20:57:44.167956       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1202 20:57:44.170199       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:57:47.625348       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:57:51.885875       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:57:55.484712       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:57:58.538231       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:58:01.561254       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:58:01.565805       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1202 20:58:01.565948       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1202 20:58:01.566096       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"34c56701-7501-4c39-8645-5294da9c60ee", APIVersion:"v1", ResourceVersion:"633", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-386191_00067f6e-056e-4fd5-be99-0ad0554d7df5 became leader
	I1202 20:58:01.566122       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-386191_00067f6e-056e-4fd5-be99-0ad0554d7df5!
	W1202 20:58:01.568119       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:58:01.571063       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1202 20:58:01.666460       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-386191_00067f6e-056e-4fd5-be99-0ad0554d7df5!
	W1202 20:58:03.573978       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:58:03.578414       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:58:05.581746       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:58:05.586264       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:58:07.589806       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 20:58:07.595316       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-386191 -n embed-certs-386191
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-386191 -n embed-certs-386191: exit status 2 (343.690807ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-386191 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (6.18s)


Test pass (334/415)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 14.05
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.08
9 TestDownloadOnly/v1.28.0/DeleteAll 0.24
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.16
12 TestDownloadOnly/v1.34.2/json-events 9.62
13 TestDownloadOnly/v1.34.2/preload-exists 0
17 TestDownloadOnly/v1.34.2/LogsDuration 0.08
18 TestDownloadOnly/v1.34.2/DeleteAll 0.25
19 TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds 0.15
21 TestDownloadOnly/v1.35.0-beta.0/json-events 3.14
23 TestDownloadOnly/v1.35.0-beta.0/cached-images 0
24 TestDownloadOnly/v1.35.0-beta.0/binaries 0
26 TestDownloadOnly/v1.35.0-beta.0/LogsDuration 0.08
27 TestDownloadOnly/v1.35.0-beta.0/DeleteAll 0.24
28 TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds 0.16
29 TestDownloadOnlyKic 0.43
30 TestBinaryMirror 0.85
31 TestOffline 59.72
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
36 TestAddons/Setup 133.81
40 TestAddons/serial/GCPAuth/Namespaces 0.12
41 TestAddons/serial/GCPAuth/FakeCredentials 9.44
57 TestAddons/StoppedEnableDisable 18.67
58 TestCertOptions 24.96
59 TestCertExpiration 216.98
61 TestForceSystemdFlag 29.86
62 TestForceSystemdEnv 30.83
67 TestErrorSpam/setup 20.26
68 TestErrorSpam/start 0.7
69 TestErrorSpam/status 1.01
70 TestErrorSpam/pause 5.85
71 TestErrorSpam/unpause 5.31
72 TestErrorSpam/stop 8.21
75 TestFunctional/serial/CopySyncFile 0
76 TestFunctional/serial/StartWithProxy 39.96
77 TestFunctional/serial/AuditLog 0
78 TestFunctional/serial/SoftStart 6.37
79 TestFunctional/serial/KubeContext 0.05
80 TestFunctional/serial/KubectlGetPods 0.06
83 TestFunctional/serial/CacheCmd/cache/add_remote 2.95
84 TestFunctional/serial/CacheCmd/cache/add_local 1.9
85 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
86 TestFunctional/serial/CacheCmd/cache/list 0.07
87 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.31
88 TestFunctional/serial/CacheCmd/cache/cache_reload 2.07
89 TestFunctional/serial/CacheCmd/cache/delete 0.13
90 TestFunctional/serial/MinikubeKubectlCmd 0.13
91 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
92 TestFunctional/serial/ExtraConfig 45.58
93 TestFunctional/serial/ComponentHealth 0.07
94 TestFunctional/serial/LogsCmd 1.33
95 TestFunctional/serial/LogsFileCmd 1.34
96 TestFunctional/serial/InvalidService 4.47
98 TestFunctional/parallel/ConfigCmd 0.47
99 TestFunctional/parallel/DashboardCmd 8.44
100 TestFunctional/parallel/DryRun 0.41
101 TestFunctional/parallel/InternationalLanguage 0.18
102 TestFunctional/parallel/StatusCmd 1.07
107 TestFunctional/parallel/AddonsCmd 0.21
108 TestFunctional/parallel/PersistentVolumeClaim 23.6
110 TestFunctional/parallel/SSHCmd 0.66
111 TestFunctional/parallel/CpCmd 1.97
112 TestFunctional/parallel/MySQL 17.3
113 TestFunctional/parallel/FileSync 0.32
114 TestFunctional/parallel/CertSync 1.91
118 TestFunctional/parallel/NodeLabels 0.08
120 TestFunctional/parallel/NonActiveRuntimeDisabled 0.69
122 TestFunctional/parallel/License 0.73
123 TestFunctional/parallel/ImageCommands/ImageListShort 0.24
124 TestFunctional/parallel/ImageCommands/ImageListTable 0.29
125 TestFunctional/parallel/ImageCommands/ImageListJson 0.29
126 TestFunctional/parallel/ImageCommands/ImageListYaml 0.27
127 TestFunctional/parallel/ImageCommands/ImageBuild 3.7
128 TestFunctional/parallel/ImageCommands/Setup 1.79
129 TestFunctional/parallel/UpdateContextCmd/no_changes 0.17
130 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.16
131 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.16
136 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.56
137 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
139 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 14.32
142 TestFunctional/parallel/ImageCommands/ImageRemove 0.56
145 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.08
146 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
150 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
151 TestFunctional/parallel/Version/short 0.09
152 TestFunctional/parallel/Version/components 0.61
153 TestFunctional/parallel/ProfileCmd/profile_not_create 0.44
154 TestFunctional/parallel/ProfileCmd/profile_list 0.43
155 TestFunctional/parallel/ProfileCmd/profile_json_output 0.42
156 TestFunctional/parallel/MountCmd/any-port 7.74
157 TestFunctional/parallel/MountCmd/specific-port 2.03
158 TestFunctional/parallel/MountCmd/VerifyCleanup 2.03
159 TestFunctional/parallel/ServiceCmd/List 1.73
160 TestFunctional/parallel/ServiceCmd/JSONOutput 1.73
164 TestFunctional/delete_echo-server_images 0.04
165 TestFunctional/delete_my-image_image 0.02
166 TestFunctional/delete_minikube_cached_images 0.02
170 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile 0
171 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy 46.94
172 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog 0
173 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart 6.53
174 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext 0.05
175 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods 0.08
178 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote 2.96
179 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local 1.84
180 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete 0.07
181 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list 0.07
182 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node 0.31
183 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload 1.72
184 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete 0.15
185 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd 0.13
186 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly 0.13
187 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig 42.92
188 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth 0.07
189 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd 1.29
190 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd 1.31
191 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService 3.95
193 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd 0.48
194 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd 11.69
195 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun 0.43
196 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage 0.18
197 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd 1.02
202 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd 0.16
203 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim 24.45
205 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd 0.63
206 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd 1.97
207 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL 15.69
208 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync 0.33
209 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync 1.96
213 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels 0.08
215 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled 0.66
217 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License 0.39
218 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort 0.23
219 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable 0.26
220 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson 0.23
221 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml 0.24
222 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild 3.83
223 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup 0.85
224 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short 0.07
225 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components 0.57
227 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes 0.21
228 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster 0.15
229 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters 0.16
233 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel 0.56
235 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel 0
237 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup 10.2
239 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove 0.67
242 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
243 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect 0
247 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel 0.11
248 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create 0.45
249 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list 0.43
250 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output 0.43
251 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port 7.95
252 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port 1.85
253 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup 1.65
254 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List 1.73
255 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput 1.73
259 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images 0.04
260 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image 0.02
261 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images 0.02
265 TestMultiControlPlane/serial/StartCluster 120.27
266 TestMultiControlPlane/serial/DeployApp 5.5
267 TestMultiControlPlane/serial/PingHostFromPods 1.12
268 TestMultiControlPlane/serial/AddWorkerNode 27.53
269 TestMultiControlPlane/serial/NodeLabels 0.07
270 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.96
271 TestMultiControlPlane/serial/CopyFile 18.42
272 TestMultiControlPlane/serial/StopSecondaryNode 14.43
273 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.76
274 TestMultiControlPlane/serial/RestartSecondaryNode 8.87
275 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.95
276 TestMultiControlPlane/serial/RestartClusterKeepsNodes 119.82
277 TestMultiControlPlane/serial/DeleteSecondaryNode 10.83
278 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.76
279 TestMultiControlPlane/serial/StopCluster 48.81
280 TestMultiControlPlane/serial/RestartCluster 56.44
281 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.75
282 TestMultiControlPlane/serial/AddSecondaryNode 44.83
283 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.95
288 TestJSONOutput/start/Command 37.31
289 TestJSONOutput/start/Audit 0
291 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
292 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
295 TestJSONOutput/pause/Audit 0
297 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
298 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
301 TestJSONOutput/unpause/Audit 0
303 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
304 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
306 TestJSONOutput/stop/Command 8.07
307 TestJSONOutput/stop/Audit 0
309 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
310 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
311 TestErrorJSONOutput 0.26
313 TestKicCustomNetwork/create_custom_network 31.29
314 TestKicCustomNetwork/use_default_bridge_network 21.51
315 TestKicExistingNetwork 24.5
316 TestKicCustomSubnet 29.18
317 TestKicStaticIP 23.91
318 TestMainNoArgs 0.07
319 TestMinikubeProfile 50.49
322 TestMountStart/serial/StartWithMountFirst 5.15
323 TestMountStart/serial/VerifyMountFirst 0.29
324 TestMountStart/serial/StartWithMountSecond 5.05
325 TestMountStart/serial/VerifyMountSecond 0.29
326 TestMountStart/serial/DeleteFirst 1.71
327 TestMountStart/serial/VerifyMountPostDelete 0.29
328 TestMountStart/serial/Stop 1.27
329 TestMountStart/serial/RestartStopped 7.81
330 TestMountStart/serial/VerifyMountPostStop 0.29
333 TestMultiNode/serial/FreshStart2Nodes 63.83
334 TestMultiNode/serial/DeployApp2Nodes 4.21
335 TestMultiNode/serial/PingHostFrom2Pods 0.79
336 TestMultiNode/serial/AddNode 24.04
337 TestMultiNode/serial/MultiNodeLabels 0.06
338 TestMultiNode/serial/ProfileList 0.69
339 TestMultiNode/serial/CopyFile 10.42
340 TestMultiNode/serial/StopNode 2.33
341 TestMultiNode/serial/StartAfterStop 7.52
342 TestMultiNode/serial/RestartKeepsNodes 73.6
343 TestMultiNode/serial/DeleteNode 5.35
344 TestMultiNode/serial/StopMultiNode 30.52
345 TestMultiNode/serial/RestartMultiNode 50.52
346 TestMultiNode/serial/ValidateNameConflict 26.88
351 TestPreload 107.92
353 TestScheduledStopUnix 97.93
356 TestInsufficientStorage 12.18
357 TestRunningBinaryUpgrade 296.91
359 TestKubernetesUpgrade 84.1
360 TestMissingContainerUpgrade 60.19
362 TestStoppedBinaryUpgrade/Setup 3.22
363 TestPause/serial/Start 55.94
365 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
366 TestNoKubernetes/serial/StartWithK8s 39.36
367 TestStoppedBinaryUpgrade/Upgrade 310.99
368 TestNoKubernetes/serial/StartWithStopK8s 6.37
369 TestNoKubernetes/serial/Start 7.7
370 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
371 TestNoKubernetes/serial/VerifyK8sNotRunning 0.35
372 TestNoKubernetes/serial/ProfileList 1.86
373 TestNoKubernetes/serial/Stop 1.29
374 TestPause/serial/SecondStartNoReconfiguration 6.52
375 TestNoKubernetes/serial/StartNoArgs 7.48
384 TestNetworkPlugins/group/false 4.25
385 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.33
396 TestNetworkPlugins/group/auto/Start 41.12
397 TestNetworkPlugins/group/auto/KubeletFlags 0.31
398 TestNetworkPlugins/group/auto/NetCatPod 8.18
399 TestNetworkPlugins/group/auto/DNS 0.11
400 TestNetworkPlugins/group/auto/Localhost 0.09
401 TestNetworkPlugins/group/auto/HairPin 0.09
402 TestStoppedBinaryUpgrade/MinikubeLogs 1.28
403 TestNetworkPlugins/group/kindnet/Start 42.87
404 TestNetworkPlugins/group/calico/Start 59.15
405 TestNetworkPlugins/group/custom-flannel/Start 50.8
406 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
407 TestNetworkPlugins/group/kindnet/KubeletFlags 0.35
408 TestNetworkPlugins/group/kindnet/NetCatPod 8.2
409 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.32
410 TestNetworkPlugins/group/custom-flannel/NetCatPod 8.26
411 TestNetworkPlugins/group/kindnet/DNS 0.17
412 TestNetworkPlugins/group/kindnet/Localhost 0.13
413 TestNetworkPlugins/group/kindnet/HairPin 0.14
414 TestNetworkPlugins/group/calico/ControllerPod 6.01
415 TestNetworkPlugins/group/custom-flannel/DNS 0.14
416 TestNetworkPlugins/group/custom-flannel/Localhost 0.11
417 TestNetworkPlugins/group/custom-flannel/HairPin 0.1
418 TestNetworkPlugins/group/calico/KubeletFlags 0.36
419 TestNetworkPlugins/group/calico/NetCatPod 9.35
420 TestNetworkPlugins/group/calico/DNS 0.13
421 TestNetworkPlugins/group/calico/Localhost 0.11
422 TestNetworkPlugins/group/enable-default-cni/Start 40.6
423 TestNetworkPlugins/group/calico/HairPin 0.13
424 TestNetworkPlugins/group/flannel/Start 50.55
425 TestNetworkPlugins/group/bridge/Start 68.64
427 TestStartStop/group/old-k8s-version/serial/FirstStart 52.46
428 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.32
429 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.21
430 TestNetworkPlugins/group/enable-default-cni/DNS 0.1
431 TestNetworkPlugins/group/enable-default-cni/Localhost 0.09
432 TestNetworkPlugins/group/enable-default-cni/HairPin 0.1
433 TestNetworkPlugins/group/flannel/ControllerPod 6.01
434 TestNetworkPlugins/group/flannel/KubeletFlags 0.34
435 TestNetworkPlugins/group/flannel/NetCatPod 8.22
436 TestNetworkPlugins/group/flannel/DNS 0.13
437 TestNetworkPlugins/group/flannel/Localhost 0.09
438 TestNetworkPlugins/group/flannel/HairPin 0.11
440 TestStartStop/group/no-preload/serial/FirstStart 47.38
441 TestStartStop/group/old-k8s-version/serial/DeployApp 10.32
442 TestNetworkPlugins/group/bridge/KubeletFlags 0.35
443 TestNetworkPlugins/group/bridge/NetCatPod 9.24
445 TestNetworkPlugins/group/bridge/DNS 0.14
446 TestNetworkPlugins/group/bridge/Localhost 0.11
447 TestNetworkPlugins/group/bridge/HairPin 0.12
449 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 44.33
450 TestStartStop/group/old-k8s-version/serial/Stop 16.54
451 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.25
452 TestStartStop/group/old-k8s-version/serial/SecondStart 51.91
454 TestStartStop/group/newest-cni/serial/FirstStart 31.88
455 TestStartStop/group/no-preload/serial/DeployApp 9.26
457 TestStartStop/group/no-preload/serial/Stop 18.22
458 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.32
459 TestStartStop/group/newest-cni/serial/DeployApp 0
462 TestStartStop/group/newest-cni/serial/Stop 3.11
463 TestStartStop/group/default-k8s-diff-port/serial/Stop 16.36
464 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.21
465 TestStartStop/group/newest-cni/serial/SecondStart 10.9
466 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
467 TestStartStop/group/no-preload/serial/SecondStart 45.54
468 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
469 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
470 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.36
472 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
473 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.27
474 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 50.32
475 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.1
477 TestStartStop/group/embed-certs/serial/FirstStart 43.76
478 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.3
480 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
481 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.07
482 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.25
484 TestStartStop/group/embed-certs/serial/DeployApp 9.26
485 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
486 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
488 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.28
489 TestStartStop/group/embed-certs/serial/Stop 18.16
491 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.21
492 TestStartStop/group/embed-certs/serial/SecondStart 47.56
493 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
494 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
495 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.25
TestDownloadOnly/v1.28.0/json-events (14.05s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-243407 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-243407 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (14.044896056s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (14.05s)

TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1202 19:54:25.536573  411032 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1202 19:54:25.536680  411032 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21997-407427/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-243407
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-243407: exit status 85 (82.436622ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-243407 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-243407 │ jenkins │ v1.37.0 │ 02 Dec 25 19:54 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 19:54:11
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 19:54:11.549731  411044 out.go:360] Setting OutFile to fd 1 ...
	I1202 19:54:11.549838  411044 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 19:54:11.549843  411044 out.go:374] Setting ErrFile to fd 2...
	I1202 19:54:11.549847  411044 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 19:54:11.550139  411044 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-407427/.minikube/bin
	W1202 19:54:11.550304  411044 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21997-407427/.minikube/config/config.json: open /home/jenkins/minikube-integration/21997-407427/.minikube/config/config.json: no such file or directory
	I1202 19:54:11.550778  411044 out.go:368] Setting JSON to true
	I1202 19:54:11.551802  411044 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":5796,"bootTime":1764699456,"procs":220,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 19:54:11.551879  411044 start.go:143] virtualization: kvm guest
	I1202 19:54:11.554785  411044 out.go:99] [download-only-243407] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1202 19:54:11.555014  411044 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/21997-407427/.minikube/cache/preloaded-tarball: no such file or directory
	I1202 19:54:11.555105  411044 notify.go:221] Checking for updates...
	I1202 19:54:11.556653  411044 out.go:171] MINIKUBE_LOCATION=21997
	I1202 19:54:11.558250  411044 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 19:54:11.559742  411044 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21997-407427/kubeconfig
	I1202 19:54:11.561185  411044 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-407427/.minikube
	I1202 19:54:11.563744  411044 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1202 19:54:11.566524  411044 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1202 19:54:11.566821  411044 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 19:54:11.591835  411044 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1202 19:54:11.591946  411044 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 19:54:11.648981  411044 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:63 SystemTime:2025-12-02 19:54:11.638808339 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 19:54:11.649132  411044 docker.go:319] overlay module found
	I1202 19:54:11.650955  411044 out.go:99] Using the docker driver based on user configuration
	I1202 19:54:11.650994  411044 start.go:309] selected driver: docker
	I1202 19:54:11.651002  411044 start.go:927] validating driver "docker" against <nil>
	I1202 19:54:11.651178  411044 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 19:54:11.709051  411044 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:63 SystemTime:2025-12-02 19:54:11.697783092 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 19:54:11.709294  411044 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1202 19:54:11.709906  411044 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1202 19:54:11.710101  411044 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1202 19:54:11.712124  411044 out.go:171] Using Docker driver with root privileges
	I1202 19:54:11.713663  411044 cni.go:84] Creating CNI manager for ""
	I1202 19:54:11.713767  411044 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 19:54:11.713787  411044 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1202 19:54:11.713885  411044 start.go:353] cluster config:
	{Name:download-only-243407 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-243407 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 19:54:11.715664  411044 out.go:99] Starting "download-only-243407" primary control-plane node in "download-only-243407" cluster
	I1202 19:54:11.715693  411044 cache.go:134] Beginning downloading kic base image for docker with crio
	I1202 19:54:11.717334  411044 out.go:99] Pulling base image v0.0.48-1764169655-21974 ...
	I1202 19:54:11.717405  411044 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1202 19:54:11.717496  411044 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 19:54:11.735354  411044 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b to local cache
	I1202 19:54:11.735617  411044 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local cache directory
	I1202 19:54:11.735725  411044 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b to local cache
	I1202 19:54:12.067951  411044 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1202 19:54:12.067993  411044 cache.go:65] Caching tarball of preloaded images
	I1202 19:54:12.068248  411044 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1202 19:54:12.070439  411044 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1202 19:54:12.070483  411044 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1202 19:54:12.168356  411044 preload.go:295] Got checksum from GCS API "72bc7f8573f574c02d8c9a9b3496176b"
	I1202 19:54:12.168482  411044 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/21997-407427/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-243407 host does not exist
	  To start a cluster, run: "minikube start -p download-only-243407"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.08s)
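The "Downloading Kubernetes v1.28.0 preload ..." step in the log above fetches the tarball from GCS and validates it against the MD5 returned by the GCS API (72bc7f8573f574c02d8c9a9b3496176b). A rough Go sketch of that download-and-verify flow, assuming standard-library HTTP only; minikube's own download.go adds retries, progress reporting, and atomic renames that are omitted here:

```go
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
)

func main() {
	// URL and expected checksum copied from the log; everything else is illustrative.
	url := "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4"
	wantMD5 := "72bc7f8573f574c02d8c9a9b3496176b"

	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	out, err := os.Create("preload.tar.lz4")
	if err != nil {
		panic(err)
	}
	defer out.Close()

	// Hash while writing so the tarball is only streamed once.
	h := md5.New()
	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
		panic(err)
	}

	got := hex.EncodeToString(h.Sum(nil))
	if got != wantMD5 {
		fmt.Printf("checksum mismatch: got %s, want %s\n", got, wantMD5)
		os.Exit(1)
	}
	fmt.Println("preload downloaded and verified")
}
```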

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAll (0.24s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.24s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-243407
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.16s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/json-events (9.62s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-278754 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-278754 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio: (9.621955932s)
--- PASS: TestDownloadOnly/v1.34.2/json-events (9.62s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/preload-exists
I1202 19:54:35.646652  411032 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
I1202 19:54:35.646689  411032 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21997-407427/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.2/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-278754
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-278754: exit status 85 (79.025813ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-243407 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-243407 │ jenkins │ v1.37.0 │ 02 Dec 25 19:54 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 02 Dec 25 19:54 UTC │ 02 Dec 25 19:54 UTC │
	│ delete  │ -p download-only-243407                                                                                                                                                   │ download-only-243407 │ jenkins │ v1.37.0 │ 02 Dec 25 19:54 UTC │ 02 Dec 25 19:54 UTC │
	│ start   │ -o=json --download-only -p download-only-278754 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-278754 │ jenkins │ v1.37.0 │ 02 Dec 25 19:54 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 19:54:26
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 19:54:26.082335  411431 out.go:360] Setting OutFile to fd 1 ...
	I1202 19:54:26.082460  411431 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 19:54:26.082467  411431 out.go:374] Setting ErrFile to fd 2...
	I1202 19:54:26.082473  411431 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 19:54:26.082678  411431 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-407427/.minikube/bin
	I1202 19:54:26.083195  411431 out.go:368] Setting JSON to true
	I1202 19:54:26.084268  411431 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":5810,"bootTime":1764699456,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 19:54:26.084350  411431 start.go:143] virtualization: kvm guest
	I1202 19:54:26.086434  411431 out.go:99] [download-only-278754] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1202 19:54:26.086671  411431 notify.go:221] Checking for updates...
	I1202 19:54:26.087916  411431 out.go:171] MINIKUBE_LOCATION=21997
	I1202 19:54:26.089466  411431 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 19:54:26.090952  411431 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21997-407427/kubeconfig
	I1202 19:54:26.093336  411431 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-407427/.minikube
	I1202 19:54:26.094859  411431 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1202 19:54:26.097444  411431 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1202 19:54:26.097743  411431 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 19:54:26.122585  411431 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1202 19:54:26.122713  411431 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 19:54:26.178714  411431 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-02 19:54:26.168958116 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 19:54:26.178827  411431 docker.go:319] overlay module found
	I1202 19:54:26.180377  411431 out.go:99] Using the docker driver based on user configuration
	I1202 19:54:26.180412  411431 start.go:309] selected driver: docker
	I1202 19:54:26.180422  411431 start.go:927] validating driver "docker" against <nil>
	I1202 19:54:26.180528  411431 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 19:54:26.240367  411431 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-02 19:54:26.230149522 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 19:54:26.240550  411431 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1202 19:54:26.241039  411431 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1202 19:54:26.241212  411431 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1202 19:54:26.242887  411431 out.go:171] Using Docker driver with root privileges
	I1202 19:54:26.244140  411431 cni.go:84] Creating CNI manager for ""
	I1202 19:54:26.244290  411431 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 19:54:26.244306  411431 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1202 19:54:26.244393  411431 start.go:353] cluster config:
	{Name:download-only-278754 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:download-only-278754 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 19:54:26.245552  411431 out.go:99] Starting "download-only-278754" primary control-plane node in "download-only-278754" cluster
	I1202 19:54:26.245568  411431 cache.go:134] Beginning downloading kic base image for docker with crio
	I1202 19:54:26.246566  411431 out.go:99] Pulling base image v0.0.48-1764169655-21974 ...
	I1202 19:54:26.246596  411431 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 19:54:26.246714  411431 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 19:54:26.264658  411431 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b to local cache
	I1202 19:54:26.264786  411431 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local cache directory
	I1202 19:54:26.264822  411431 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local cache directory, skipping pull
	I1202 19:54:26.264831  411431 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in cache, skipping pull
	I1202 19:54:26.264838  411431 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b as a tarball
	I1202 19:54:26.337432  411431 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.2/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1202 19:54:26.337488  411431 cache.go:65] Caching tarball of preloaded images
	I1202 19:54:26.337702  411431 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 19:54:26.339376  411431 out.go:99] Downloading Kubernetes v1.34.2 preload ...
	I1202 19:54:26.339403  411431 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1202 19:54:26.439583  411431 preload.go:295] Got checksum from GCS API "40ac2ac600e3e4b9dc7a3f8c6cb2ed91"
	I1202 19:54:26.439633  411431 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.2/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4?checksum=md5:40ac2ac600e3e4b9dc7a3f8c6cb2ed91 -> /home/jenkins/minikube-integration/21997-407427/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-278754 host does not exist
	  To start a cluster, run: "minikube start -p download-only-278754"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.2/LogsDuration (0.08s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/DeleteAll (0.25s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.2/DeleteAll (0.25s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-278754
--- PASS: TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/json-events (3.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-993370 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-993370 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (3.134660112s)
--- PASS: TestDownloadOnly/v1.35.0-beta.0/json-events (3.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/cached-images
--- PASS: TestDownloadOnly/v1.35.0-beta.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/binaries
--- PASS: TestDownloadOnly/v1.35.0-beta.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-993370
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-993370: exit status 85 (80.550372ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                       ARGS                                                                                       │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-243407 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio        │ download-only-243407 │ jenkins │ v1.37.0 │ 02 Dec 25 19:54 UTC │                     │
	│ delete  │ --all                                                                                                                                                                            │ minikube             │ jenkins │ v1.37.0 │ 02 Dec 25 19:54 UTC │ 02 Dec 25 19:54 UTC │
	│ delete  │ -p download-only-243407                                                                                                                                                          │ download-only-243407 │ jenkins │ v1.37.0 │ 02 Dec 25 19:54 UTC │ 02 Dec 25 19:54 UTC │
	│ start   │ -o=json --download-only -p download-only-278754 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio        │ download-only-278754 │ jenkins │ v1.37.0 │ 02 Dec 25 19:54 UTC │                     │
	│ delete  │ --all                                                                                                                                                                            │ minikube             │ jenkins │ v1.37.0 │ 02 Dec 25 19:54 UTC │ 02 Dec 25 19:54 UTC │
	│ delete  │ -p download-only-278754                                                                                                                                                          │ download-only-278754 │ jenkins │ v1.37.0 │ 02 Dec 25 19:54 UTC │ 02 Dec 25 19:54 UTC │
	│ start   │ -o=json --download-only -p download-only-993370 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-993370 │ jenkins │ v1.37.0 │ 02 Dec 25 19:54 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 19:54:36
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 19:54:36.183155  411795 out.go:360] Setting OutFile to fd 1 ...
	I1202 19:54:36.183459  411795 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 19:54:36.183470  411795 out.go:374] Setting ErrFile to fd 2...
	I1202 19:54:36.183475  411795 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 19:54:36.183697  411795 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-407427/.minikube/bin
	I1202 19:54:36.184219  411795 out.go:368] Setting JSON to true
	I1202 19:54:36.185147  411795 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":5820,"bootTime":1764699456,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 19:54:36.185216  411795 start.go:143] virtualization: kvm guest
	I1202 19:54:36.187109  411795 out.go:99] [download-only-993370] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1202 19:54:36.187309  411795 notify.go:221] Checking for updates...
	I1202 19:54:36.188332  411795 out.go:171] MINIKUBE_LOCATION=21997
	I1202 19:54:36.189706  411795 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 19:54:36.191247  411795 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21997-407427/kubeconfig
	I1202 19:54:36.192549  411795 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-407427/.minikube
	I1202 19:54:36.193917  411795 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1202 19:54:36.196197  411795 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1202 19:54:36.196435  411795 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 19:54:36.220973  411795 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1202 19:54:36.221098  411795 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 19:54:36.278172  411795 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-02 19:54:36.268415931 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 19:54:36.278312  411795 docker.go:319] overlay module found
	I1202 19:54:36.280112  411795 out.go:99] Using the docker driver based on user configuration
	I1202 19:54:36.280204  411795 start.go:309] selected driver: docker
	I1202 19:54:36.280218  411795 start.go:927] validating driver "docker" against <nil>
	I1202 19:54:36.280398  411795 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 19:54:36.337576  411795 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-02 19:54:36.328176331 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 19:54:36.337761  411795 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1202 19:54:36.338381  411795 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1202 19:54:36.338549  411795 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1202 19:54:36.340408  411795 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-993370 host does not exist
	  To start a cluster, run: "minikube start -p download-only-993370"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.08s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.24s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.24s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-993370
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.16s)

                                                
                                    
x
+
TestDownloadOnlyKic (0.43s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-261487 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-261487" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-261487
--- PASS: TestDownloadOnlyKic (0.43s)

                                                
                                    
x
+
TestBinaryMirror (0.85s)

                                                
                                                
=== RUN   TestBinaryMirror
I1202 19:54:40.780133  411032 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-599276 --alsologtostderr --binary-mirror http://127.0.0.1:35789 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-599276" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-599276
--- PASS: TestBinaryMirror (0.85s)
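TestBinaryMirror points minikube at a local mirror via --binary-mirror http://127.0.0.1:35789, so kubectl and the other release binaries are fetched from that server instead of dl.k8s.io. A stand-in for such a mirror is just a static file server; the port matches the log above, while the root directory and the assumed dl.k8s.io-style release layout are illustrative, not the test harness's actual internals:

```go
package main

import (
	"log"
	"net/http"
)

func main() {
	// Assumed layout, mirroring dl.k8s.io releases:
	//   ./mirror/<version>/bin/<os>/<arch>/<binary>
	root := "./mirror"
	addr := "127.0.0.1:35789"

	log.Printf("serving %s on http://%s", root, addr)
	log.Fatal(http.ListenAndServe(addr, http.FileServer(http.Dir(root))))
}
```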

                                                
                                    
x
+
TestOffline (59.72s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-750983 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-750983 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (57.113030823s)
helpers_test.go:175: Cleaning up "offline-crio-750983" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-750983
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-750983: (2.608348561s)
--- PASS: TestOffline (59.72s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-893295
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-893295: exit status 85 (70.886925ms)

                                                
                                                
-- stdout --
	* Profile "addons-893295" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-893295"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)
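The pre-setup checks assert that enabling (or disabling) an addon against a profile that does not exist fails cleanly with exit status 85 and the "Profile ... not found" hint shown above. A sketch of the same assertion outside the test framework, driving the binary with os/exec; the binary path, profile name, and expected code are taken from the log, the rest is illustrative:

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64",
		"addons", "enable", "dashboard", "-p", "addons-893295")
	out, err := cmd.CombinedOutput()

	// Recover the process exit code; a non-ExitError means the command never ran.
	code := 0
	if exitErr, ok := err.(*exec.ExitError); ok {
		code = exitErr.ExitCode()
	} else if err != nil {
		panic(err)
	}

	fmt.Printf("exit=%d\n%s", code, out)
	if code != 85 {
		fmt.Println("expected exit status 85 for a non-existing profile")
	}
}
```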

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-893295
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-893295: exit status 85 (70.910884ms)

                                                
                                                
-- stdout --
	* Profile "addons-893295" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-893295"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
x
+
TestAddons/Setup (133.81s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-893295 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-893295 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m13.813312714s)
--- PASS: TestAddons/Setup (133.81s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-893295 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-893295 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/FakeCredentials (9.44s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-893295 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-893295 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [76f9d798-f2b6-4d6f-9f6c-3fba90dc0c01] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [76f9d798-f2b6-4d6f-9f6c-3fba90dc0c01] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.003613859s
addons_test.go:694: (dbg) Run:  kubectl --context addons-893295 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-893295 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-893295 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.44s)
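The FakeCredentials flow reduces to: wait for the busybox pod to be Running, then exec into it and confirm the gcp-auth webhook injected GOOGLE_APPLICATION_CREDENTIALS and GOOGLE_CLOUD_PROJECT. A minimal sketch driving kubectl from Go (context and pod name come from the log; this is an illustration, not the test's own code):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	for _, envVar := range []string{"GOOGLE_APPLICATION_CREDENTIALS", "GOOGLE_CLOUD_PROJECT"} {
		// Equivalent of: kubectl --context addons-893295 exec busybox -- printenv <VAR>
		out, err := exec.Command("kubectl", "--context", "addons-893295",
			"exec", "busybox", "--", "/bin/sh", "-c", "printenv "+envVar).Output()
		if err != nil {
			fmt.Printf("%s: not set (%v)\n", envVar, err)
			continue
		}
		fmt.Printf("%s=%s\n", envVar, strings.TrimSpace(string(out)))
	}
}
```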

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (18.67s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-893295
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-893295: (18.35630229s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-893295
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-893295
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-893295
--- PASS: TestAddons/StoppedEnableDisable (18.67s)

TestCertOptions (24.96s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-418595 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-418595 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (21.126543356s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-418595 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-418595 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-418595 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-418595" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-418595
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-418595: (2.962091619s)
--- PASS: TestCertOptions (24.96s)

TestCertExpiration (216.98s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-877706 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-877706 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (28.183193262s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-877706 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
E1202 20:51:39.166790  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/addons-893295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-877706 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (6.162806698s)
helpers_test.go:175: Cleaning up "cert-expiration-877706" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-877706
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-877706: (2.632385575s)
--- PASS: TestCertExpiration (216.98s)

TestForceSystemdFlag (29.86s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-926783 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1202 20:47:47.562348  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/functional-536475/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-926783 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (24.565306227s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-926783 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-926783" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-926783
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-926783: (4.927191094s)
--- PASS: TestForceSystemdFlag (29.86s)

TestForceSystemdEnv (30.83s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-959456 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-959456 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (27.692565854s)
helpers_test.go:175: Cleaning up "force-systemd-env-959456" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-959456
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-959456: (3.135837632s)
--- PASS: TestForceSystemdEnv (30.83s)

TestErrorSpam/setup (20.26s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-763900 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-763900 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-763900 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-763900 --driver=docker  --container-runtime=crio: (20.255425715s)
--- PASS: TestErrorSpam/setup (20.26s)

TestErrorSpam/start (0.7s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-763900 --log_dir /tmp/nospam-763900 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-763900 --log_dir /tmp/nospam-763900 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-763900 --log_dir /tmp/nospam-763900 start --dry-run
--- PASS: TestErrorSpam/start (0.70s)

TestErrorSpam/status (1.01s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-763900 --log_dir /tmp/nospam-763900 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-763900 --log_dir /tmp/nospam-763900 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-763900 --log_dir /tmp/nospam-763900 status
--- PASS: TestErrorSpam/status (1.01s)

TestErrorSpam/pause (5.85s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-763900 --log_dir /tmp/nospam-763900 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-763900 --log_dir /tmp/nospam-763900 pause: exit status 80 (1.820860681s)
-- stdout --
	* Pausing node nospam-763900 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T20:00:36Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-763900 --log_dir /tmp/nospam-763900 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-763900 --log_dir /tmp/nospam-763900 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-763900 --log_dir /tmp/nospam-763900 pause: exit status 80 (1.916942591s)
-- stdout --
	* Pausing node nospam-763900 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T20:00:38Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-763900 --log_dir /tmp/nospam-763900 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-763900 --log_dir /tmp/nospam-763900 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-763900 --log_dir /tmp/nospam-763900 pause: exit status 80 (2.113703523s)
-- stdout --
	* Pausing node nospam-763900 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T20:00:40Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-763900 --log_dir /tmp/nospam-763900 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (5.85s)
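All three pause attempts above exit with status 80 because the node-side command "sudo runc list -f json" fails with "open /run/runc: no such file or directory". A minimal manual reproduction, assuming the nospam-763900 profile is still running; the commands below are illustrative and not part of the test:

	out/minikube-linux-amd64 -p nospam-763900 ssh -- "sudo ls -ld /run/runc"
	out/minikube-linux-amd64 -p nospam-763900 ssh -- "sudo runc list -f json"

If crio on this image uses a different OCI runtime or state root, /run/runc would never be created, and both commands should show the same error as the stderr blocks above. The unpause failures that follow hit the identical code path.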

TestErrorSpam/unpause (5.31s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-763900 --log_dir /tmp/nospam-763900 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-763900 --log_dir /tmp/nospam-763900 unpause: exit status 80 (1.735163092s)
-- stdout --
	* Unpausing node nospam-763900 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T20:00:42Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-763900 --log_dir /tmp/nospam-763900 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-763900 --log_dir /tmp/nospam-763900 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-763900 --log_dir /tmp/nospam-763900 unpause: exit status 80 (1.620704689s)
-- stdout --
	* Unpausing node nospam-763900 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T20:00:44Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-763900 --log_dir /tmp/nospam-763900 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-763900 --log_dir /tmp/nospam-763900 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-763900 --log_dir /tmp/nospam-763900 unpause: exit status 80 (1.95713705s)
-- stdout --
	* Unpausing node nospam-763900 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T20:00:46Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-763900 --log_dir /tmp/nospam-763900 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (5.31s)

TestErrorSpam/stop (8.21s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-763900 --log_dir /tmp/nospam-763900 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-763900 --log_dir /tmp/nospam-763900 stop: (7.97891558s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-763900 --log_dir /tmp/nospam-763900 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-763900 --log_dir /tmp/nospam-763900 stop
--- PASS: TestErrorSpam/stop (8.21s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21997-407427/.minikube/files/etc/test/nested/copy/411032/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (39.96s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-536475 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-536475 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (39.963669057s)
--- PASS: TestFunctional/serial/StartWithProxy (39.96s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (6.37s)

=== RUN   TestFunctional/serial/SoftStart
I1202 20:01:39.330558  411032 config.go:182] Loaded profile config "functional-536475": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-536475 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-536475 --alsologtostderr -v=8: (6.366751683s)
functional_test.go:678: soft start took 6.369373719s for "functional-536475" cluster.
I1202 20:01:45.698505  411032 config.go:182] Loaded profile config "functional-536475": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/SoftStart (6.37s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-536475 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.06s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.95s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-536475 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-536475 cache add registry.k8s.io/pause:3.1: (1.009636924s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-536475 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-536475 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.95s)

TestFunctional/serial/CacheCmd/cache/add_local (1.9s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-536475 /tmp/TestFunctionalserialCacheCmdcacheadd_local3393648236/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-536475 cache add minikube-local-cache-test:functional-536475
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-536475 cache add minikube-local-cache-test:functional-536475: (1.515234795s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-536475 cache delete minikube-local-cache-test:functional-536475
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-536475
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.90s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

TestFunctional/serial/CacheCmd/cache/list (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-536475 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-536475 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-536475 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-536475 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (304.523911ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-536475 cache reload
functional_test.go:1173: (dbg) Done: out/minikube-linux-amd64 -p functional-536475 cache reload: (1.112688082s)
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-536475 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.07s)

TestFunctional/serial/CacheCmd/cache/delete (0.13s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

TestFunctional/serial/MinikubeKubectlCmd (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-536475 kubectl -- --context functional-536475 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-536475 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

TestFunctional/serial/ExtraConfig (45.58s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-536475 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1202 20:01:56.098298  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/addons-893295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:01:56.104749  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/addons-893295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:01:56.116199  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/addons-893295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:01:56.137671  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/addons-893295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:01:56.179196  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/addons-893295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:01:56.260661  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/addons-893295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:01:56.422307  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/addons-893295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:01:56.744235  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/addons-893295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:01:57.386281  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/addons-893295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:01:58.668132  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/addons-893295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:02:01.231104  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/addons-893295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:02:06.352621  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/addons-893295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:02:16.594895  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/addons-893295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:02:37.076251  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/addons-893295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-536475 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (45.581558859s)
functional_test.go:776: restart took 45.581703482s for "functional-536475" cluster.
I1202 20:02:39.150116  411032 config.go:182] Loaded profile config "functional-536475": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/ExtraConfig (45.58s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-536475 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.33s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-536475 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-536475 logs: (1.327131213s)
--- PASS: TestFunctional/serial/LogsCmd (1.33s)

TestFunctional/serial/LogsFileCmd (1.34s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-536475 logs --file /tmp/TestFunctionalserialLogsFileCmd540988753/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-536475 logs --file /tmp/TestFunctionalserialLogsFileCmd540988753/001/logs.txt: (1.339103649s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.34s)

TestFunctional/serial/InvalidService (4.47s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-536475 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-536475
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-536475: exit status 115 (376.840809ms)
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:30296 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-536475 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.47s)
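The exit status 115 above is the outcome the test expects: the invalid-svc service from testdata/invalidsvc.yaml has no running pod behind it (per the SVC_UNREACHABLE message), so the service command refuses to print a usable URL and the manifest is then deleted. A quick way to confirm the empty backend, assuming the functional-536475 context is still available; this check is illustrative and not part of the test:

	kubectl --context functional-536475 get svc,endpoints invalid-svc

An Endpoints object with no addresses is consistent with the "no running pod for service invalid-svc found" message printed above.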

TestFunctional/parallel/ConfigCmd (0.47s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-536475 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-536475 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-536475 config get cpus: exit status 14 (88.742451ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-536475 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-536475 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-536475 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-536475 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-536475 config get cpus: exit status 14 (77.105912ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.47s)

TestFunctional/parallel/DashboardCmd (8.44s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-536475 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-536475 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 451352: os: process already finished
E1202 20:04:39.959573  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/addons-893295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:06:56.097326  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/addons-893295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:07:23.801553  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/addons-893295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:11:56.096610  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/addons-893295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/DashboardCmd (8.44s)

TestFunctional/parallel/DryRun (0.41s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-536475 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-536475 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (173.691716ms)
-- stdout --
	* [functional-536475] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21997-407427/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-407427/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I1202 20:03:19.102553  450514 out.go:360] Setting OutFile to fd 1 ...
	I1202 20:03:19.102650  450514 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:03:19.102655  450514 out.go:374] Setting ErrFile to fd 2...
	I1202 20:03:19.102659  450514 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:03:19.102852  450514 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-407427/.minikube/bin
	I1202 20:03:19.103331  450514 out.go:368] Setting JSON to false
	I1202 20:03:19.104348  450514 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":6343,"bootTime":1764699456,"procs":241,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 20:03:19.104412  450514 start.go:143] virtualization: kvm guest
	I1202 20:03:19.106430  450514 out.go:179] * [functional-536475] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1202 20:03:19.107748  450514 out.go:179]   - MINIKUBE_LOCATION=21997
	I1202 20:03:19.107754  450514 notify.go:221] Checking for updates...
	I1202 20:03:19.109216  450514 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 20:03:19.110730  450514 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-407427/kubeconfig
	I1202 20:03:19.112206  450514 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-407427/.minikube
	I1202 20:03:19.113500  450514 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1202 20:03:19.114958  450514 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 20:03:19.116692  450514 config.go:182] Loaded profile config "functional-536475": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 20:03:19.117326  450514 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 20:03:19.141946  450514 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1202 20:03:19.142040  450514 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 20:03:19.203167  450514 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-02 20:03:19.192223598 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 20:03:19.203286  450514 docker.go:319] overlay module found
	I1202 20:03:19.205502  450514 out.go:179] * Using the docker driver based on existing profile
	I1202 20:03:19.206585  450514 start.go:309] selected driver: docker
	I1202 20:03:19.206606  450514 start.go:927] validating driver "docker" against &{Name:functional-536475 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-536475 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 20:03:19.206698  450514 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 20:03:19.208473  450514 out.go:203] 
	W1202 20:03:19.209563  450514 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1202 20:03:19.210583  450514 out.go:203] 
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-536475 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.41s)
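Both dry-run invocations behave as the test expects: the first is rejected with RSRC_INSUFFICIENT_REQ_MEMORY because the requested 250MiB is below the stated minimum of 1800MB, and the second, which omits the memory override, validates cleanly. For comparison, a dry run requesting memory at or above that minimum should clear the same validation; the 2048mb value below is illustrative only:

	out/minikube-linux-amd64 start -p functional-536475 --dry-run --memory 2048mb --alsologtostderr --driver=docker --container-runtime=crio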

TestFunctional/parallel/InternationalLanguage (0.18s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-536475 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-536475 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (178.636173ms)
-- stdout --
	* [functional-536475] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21997-407427/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-407427/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I1202 20:03:19.514807  450734 out.go:360] Setting OutFile to fd 1 ...
	I1202 20:03:19.515103  450734 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:03:19.515114  450734 out.go:374] Setting ErrFile to fd 2...
	I1202 20:03:19.515118  450734 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:03:19.515455  450734 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-407427/.minikube/bin
	I1202 20:03:19.515940  450734 out.go:368] Setting JSON to false
	I1202 20:03:19.517003  450734 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":6344,"bootTime":1764699456,"procs":241,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 20:03:19.517088  450734 start.go:143] virtualization: kvm guest
	I1202 20:03:19.518607  450734 out.go:179] * [functional-536475] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1202 20:03:19.519957  450734 out.go:179]   - MINIKUBE_LOCATION=21997
	I1202 20:03:19.519959  450734 notify.go:221] Checking for updates...
	I1202 20:03:19.522291  450734 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 20:03:19.523431  450734 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-407427/kubeconfig
	I1202 20:03:19.524587  450734 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-407427/.minikube
	I1202 20:03:19.525629  450734 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1202 20:03:19.526551  450734 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 20:03:19.528031  450734 config.go:182] Loaded profile config "functional-536475": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 20:03:19.528657  450734 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 20:03:19.554866  450734 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1202 20:03:19.554962  450734 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 20:03:19.618156  450734 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-02 20:03:19.6067245 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 20:03:19.618287  450734 docker.go:319] overlay module found
	I1202 20:03:19.619766  450734 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1202 20:03:19.620763  450734 start.go:309] selected driver: docker
	I1202 20:03:19.620779  450734 start.go:927] validating driver "docker" against &{Name:functional-536475 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-536475 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 20:03:19.620882  450734 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 20:03:19.622670  450734 out.go:203] 
	W1202 20:03:19.624214  450734 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1202 20:03:19.625163  450734 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.18s)
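
Note: the French output above is the point of this test — the dry run is executed under a non-English locale and must still fail with the localized RSRC_INSUFFICIENT_REQ_MEMORY message. A sketch of reproducing it by hand; the locale variable is an assumption, since the test's actual environment setup is not visible in this log:
  LC_ALL=fr_FR.UTF-8 out/minikube-linux-amd64 start -p functional-536475 --dry-run --memory 250MB --alsologtostderr --driver=docker --container-runtime=crio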

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-536475 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-536475 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-536475 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.07s)
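
Note: the three status calls above cover the default, Go-template, and JSON output modes. A minimal sketch of the same queries, assuming a hypothetical profile name "my-profile" (the template keys mirror the fields the test reads):
  # default human-readable status
  minikube -p my-profile status
  # custom Go template over the status fields
  minikube -p my-profile status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
  # machine-readable JSON
  minikube -p my-profile status -o json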

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-536475 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-536475 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.21s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (23.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [3ea8d3dd-688a-41a4-8059-3b06e4f2dbda] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003478019s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-536475 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-536475 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-536475 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-536475 apply -f testdata/storage-provisioner/pod.yaml
I1202 20:03:01.447291  411032 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [21cf7c72-3cd8-4622-a161-080c43805f90] Pending
helpers_test.go:352: "sp-pod" [21cf7c72-3cd8-4622-a161-080c43805f90] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [21cf7c72-3cd8-4622-a161-080c43805f90] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.003825365s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-536475 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-536475 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-536475 apply -f testdata/storage-provisioner/pod.yaml
I1202 20:03:12.572411  411032 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [55cb829e-20a4-408f-8ffc-bc74fdd780f4] Pending
helpers_test.go:352: "sp-pod" [55cb829e-20a4-408f-8ffc-bc74fdd780f4] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [55cb829e-20a4-408f-8ffc-bc74fdd780f4] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.004304599s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-536475 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (23.60s)
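
Note: the sequence above verifies that data written through the claim survives pod deletion. A condensed sketch of the same flow, assuming the PVC "myclaim" and pod "sp-pod" defined in the referenced testdata manifests (their YAML is not shown in this log):
  kubectl apply -f testdata/storage-provisioner/pvc.yaml   # create the claim
  kubectl apply -f testdata/storage-provisioner/pod.yaml   # pod mounting the claim at /tmp/mount
  kubectl exec sp-pod -- touch /tmp/mount/foo              # write through the volume
  kubectl delete -f testdata/storage-provisioner/pod.yaml  # destroy the pod
  kubectl apply -f testdata/storage-provisioner/pod.yaml   # recreate it against the same claim
  kubectl exec sp-pod -- ls /tmp/mount                     # foo should still be listed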

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-536475 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-536475 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.66s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-536475 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-536475 ssh -n functional-536475 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-536475 cp functional-536475:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2210228552/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-536475 ssh -n functional-536475 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-536475 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-536475 ssh -n functional-536475 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.97s)
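
Note: the copy test moves a file in both directions and into a node path that does not yet exist (the successful cat afterwards implies parent directories are created). A compact sketch with a hypothetical profile/node name "my-profile":
  minikube -p my-profile cp testdata/cp-test.txt /home/docker/cp-test.txt         # host -> node
  minikube -p my-profile cp my-profile:/home/docker/cp-test.txt ./cp-test.txt     # node -> host
  minikube -p my-profile cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt  # into a fresh path
  minikube -p my-profile ssh -n my-profile "sudo cat /tmp/does/not/exist/cp-test.txt"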

                                                
                                    
x
+
TestFunctional/parallel/MySQL (17.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-536475 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-j8cq5" [fc118030-75ba-44f6-a2e3-b4479ac8818b] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-j8cq5" [fc118030-75ba-44f6-a2e3-b4479ac8818b] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 14.004469636s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-536475 exec mysql-5bb876957f-j8cq5 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-536475 exec mysql-5bb876957f-j8cq5 -- mysql -ppassword -e "show databases;": exit status 1 (97.184646ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1202 20:03:01.664001  411032 retry.go:31] will retry after 1.010807487s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-536475 exec mysql-5bb876957f-j8cq5 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-536475 exec mysql-5bb876957f-j8cq5 -- mysql -ppassword -e "show databases;": exit status 1 (92.750284ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1202 20:03:02.768820  411032 retry.go:31] will retry after 1.778747279s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-536475 exec mysql-5bb876957f-j8cq5 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (17.30s)
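
Note: the two non-zero exits above are expected while mysqld inside the pod is still initializing; the harness simply retries until the socket is available. A minimal shell equivalent of that retry, reusing the pod name from this run:
  until kubectl exec mysql-5bb876957f-j8cq5 -- mysql -ppassword -e "show databases;"; do
    sleep 2   # wait for /var/run/mysqld/mysqld.sock to appear
  done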

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/411032/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-536475 ssh "sudo cat /etc/test/nested/copy/411032/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.32s)
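
Note: FileSync checks that a file staged on the host shows up inside the node at the mirrored path. A sketch of the round trip, assuming the file was staged under $MINIKUBE_HOME/files (minikube's sync directory; the exact host-side setup is not shown in this log) before the cluster was started:
  # host side: stage the file at the path it should take inside the node
  mkdir -p $MINIKUBE_HOME/files/etc/test/nested/copy/411032
  echo "Test file for checking file sync process" > $MINIKUBE_HOME/files/etc/test/nested/copy/411032/hosts
  # node side: verify it arrived at the same relative path
  minikube -p my-profile ssh "sudo cat /etc/test/nested/copy/411032/hosts"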

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.91s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/411032.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-536475 ssh "sudo cat /etc/ssl/certs/411032.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/411032.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-536475 ssh "sudo cat /usr/share/ca-certificates/411032.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-536475 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/4110322.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-536475 ssh "sudo cat /etc/ssl/certs/4110322.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/4110322.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-536475 ssh "sudo cat /usr/share/ca-certificates/4110322.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-536475 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.91s)

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-536475 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)
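
Note: the label check uses a kubectl go-template rather than jsonpath. The same one-liner can be reused as-is against any context (only the --context value changes):
  kubectl get nodes --output=go-template \
    --template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'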

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-536475 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-536475 ssh "sudo systemctl is-active docker": exit status 1 (340.259343ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-536475 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-536475 ssh "sudo systemctl is-active containerd": exit status 1 (344.830939ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.69s)
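
Note: both exit-status-1 results above are the expected outcome — with crio selected, the docker and containerd units must report "inactive", and systemctl is-active returns non-zero for anything other than "active". A sketch of the three-way check; the crio line is an assumption not exercised above:
  minikube -p my-profile ssh "sudo systemctl is-active docker"      # expect: inactive
  minikube -p my-profile ssh "sudo systemctl is-active containerd"  # expect: inactive
  minikube -p my-profile ssh "sudo systemctl is-active crio"        # assumed: active for this profile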

                                                
                                    
x
+
TestFunctional/parallel/License (0.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.73s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-536475 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-536475 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.2
registry.k8s.io/kube-proxy:v1.34.2
registry.k8s.io/kube-controller-manager:v1.34.2
registry.k8s.io/kube-apiserver:v1.34.2
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-536475 image ls --format short --alsologtostderr:
I1202 20:03:20.228846  451206 out.go:360] Setting OutFile to fd 1 ...
I1202 20:03:20.229157  451206 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 20:03:20.229169  451206 out.go:374] Setting ErrFile to fd 2...
I1202 20:03:20.229173  451206 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 20:03:20.229380  451206 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-407427/.minikube/bin
I1202 20:03:20.230021  451206 config.go:182] Loaded profile config "functional-536475": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1202 20:03:20.230171  451206 config.go:182] Loaded profile config "functional-536475": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1202 20:03:20.230651  451206 cli_runner.go:164] Run: docker container inspect functional-536475 --format={{.State.Status}}
I1202 20:03:20.252458  451206 ssh_runner.go:195] Run: systemctl --version
I1202 20:03:20.252543  451206 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-536475
I1202 20:03:20.272986  451206 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/functional-536475/id_rsa Username:docker}
I1202 20:03:20.373523  451206 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-536475 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-536475 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/library/mysql                 │ 5.7                │ 5107333e08a87 │ 520MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ registry.k8s.io/kube-proxy              │ v1.34.2            │ 8aa150647e88a │ 73.1MB │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ gcr.io/k8s-minikube/busybox             │ latest             │ beae173ccac6a │ 1.46MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.2            │ 88320b5498ff2 │ 53.8MB │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ localhost/my-image                      │ functional-536475  │ d87ba05eedd96 │ 1.47MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ docker.io/library/nginx                 │ alpine             │ d4918ca78576a │ 54.3MB │
│ docker.io/library/nginx                 │ latest             │ 60adc2e137e75 │ 155MB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/etcd                    │ 3.6.5-0            │ a3e246e9556e9 │ 63.6MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.2            │ a5f569d49a979 │ 89MB   │
│ registry.k8s.io/kube-controller-manager │ v1.34.2            │ 01e8bacf0f500 │ 76MB   │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-536475 image ls --format table --alsologtostderr:
I1202 20:03:24.740728  452043 out.go:360] Setting OutFile to fd 1 ...
I1202 20:03:24.740878  452043 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 20:03:24.740890  452043 out.go:374] Setting ErrFile to fd 2...
I1202 20:03:24.740897  452043 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 20:03:24.741197  452043 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-407427/.minikube/bin
I1202 20:03:24.741787  452043 config.go:182] Loaded profile config "functional-536475": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1202 20:03:24.741883  452043 config.go:182] Loaded profile config "functional-536475": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1202 20:03:24.742432  452043 cli_runner.go:164] Run: docker container inspect functional-536475 --format={{.State.Status}}
I1202 20:03:24.766407  452043 ssh_runner.go:195] Run: systemctl --version
I1202 20:03:24.766528  452043 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-536475
I1202 20:03:24.792965  452043 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/functional-536475/id_rsa Username:docker}
I1202 20:03:24.902142  452043 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-536475 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-536475 image ls --format json --alsologtostderr:
[{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85","repoDigests":["registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077","registry.k8s.io/kube-apiserver@sha256:f0e0dc00029af1a9258587ef181f17a9eb7605d3d69a72668f4f6709f72005fd"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.2"],"size":"89046001"},{"id":"01e8bacf0f50095b9b
12daf485979dbcb454e08c405e42bde98e3d2198e475e8","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb","registry.k8s.io/kube-controller-manager@sha256:9eb769377f8fdeab9e1428194e2b7d19584b63a5fda8f2f406900ee7893c2f4e"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.2"],"size":"76004183"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b
9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"6a73d3c29e89f2cd141020042231cc011e368fe10c6c917008ba62cece4b50af","repoDigests":["docker.io/library/dcf72160cdd3fc80bd65a39018d69473868d1ba3c9c8b2f59194f084d790f98d-tmp@sha256:649ec542e64e6a5cfa78839da25e512b4937ff121af0b55eabc66086be50e19a"],"repoTags":[],"size":"1466131"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikub
e/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952","repoDigests":["registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6","registry.k8s.io/kube-scheduler@sha256:7a0dd12264041dec5dcbb44eeaad051d21560c6d9aa0051cc68ed281a4c26dda"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.2"],"size":"53848919"},{"id":"60adc2e137e757418d4d771822fa3b3f5d3b4ad58ef2385d200c9ee78375b6d5","repoDigests":["docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42","docker.io/library/nginx@sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541"],"repoTags":["docker.io/library/nginx:latest"],"size":"155491845"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6a
d6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534","registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"63585106"},{"id":"8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45","repoDigests":["registry.k8s.io/kube-proxy@sha256:1512fa1bace72d9bcaa7471e364e972c60805474184840a707b6afa05bde3a74","registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.2"],"size":"73145240"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","r
egistry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9","repoDigests":["docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7","docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14"],"repoTags":["docker.io/library/nginx:alpine"],"size":"54252718"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2c
cd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"d87ba05eedd9685da19469bf1a4c9191c799c1a2b58ab79ba3874b11d2a48e58","repoDigests":["localhost/my-image@sha256:f6dc10eacb5d8c67378ade48c490275a761f021f1119f670e6cdadfc78ceb0a5"],"repoTags":["localhost/my-image:functional-536475"],"size":"1468744"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-536475 image ls --format json --alsologtostderr:
I1202 20:03:24.454355  451993 out.go:360] Setting OutFile to fd 1 ...
I1202 20:03:24.454695  451993 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 20:03:24.454706  451993 out.go:374] Setting ErrFile to fd 2...
I1202 20:03:24.454712  451993 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 20:03:24.454996  451993 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-407427/.minikube/bin
I1202 20:03:24.455931  451993 config.go:182] Loaded profile config "functional-536475": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1202 20:03:24.456113  451993 config.go:182] Loaded profile config "functional-536475": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1202 20:03:24.456792  451993 cli_runner.go:164] Run: docker container inspect functional-536475 --format={{.State.Status}}
I1202 20:03:24.481621  451993 ssh_runner.go:195] Run: systemctl --version
I1202 20:03:24.481679  451993 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-536475
I1202 20:03:24.505436  451993 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/functional-536475/id_rsa Username:docker}
I1202 20:03:24.617160  451993 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-536475 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-536475 image ls --format yaml --alsologtostderr:
- id: 88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6
- registry.k8s.io/kube-scheduler@sha256:7a0dd12264041dec5dcbb44eeaad051d21560c6d9aa0051cc68ed281a4c26dda
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.2
size: "53848919"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45
repoDigests:
- registry.k8s.io/kube-proxy@sha256:1512fa1bace72d9bcaa7471e364e972c60805474184840a707b6afa05bde3a74
- registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5
repoTags:
- registry.k8s.io/kube-proxy:v1.34.2
size: "73145240"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9
repoDigests:
- docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7
- docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14
repoTags:
- docker.io/library/nginx:alpine
size: "54252718"
- id: 60adc2e137e757418d4d771822fa3b3f5d3b4ad58ef2385d200c9ee78375b6d5
repoDigests:
- docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42
- docker.io/library/nginx@sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541
repoTags:
- docker.io/library/nginx:latest
size: "155491845"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb
- registry.k8s.io/kube-controller-manager@sha256:9eb769377f8fdeab9e1428194e2b7d19584b63a5fda8f2f406900ee7893c2f4e
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.2
size: "76004183"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
- registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "63585106"
- id: a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077
- registry.k8s.io/kube-apiserver@sha256:f0e0dc00029af1a9258587ef181f17a9eb7605d3d69a72668f4f6709f72005fd
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.2
size: "89046001"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-536475 image ls --format yaml --alsologtostderr:
I1202 20:03:20.480210  451287 out.go:360] Setting OutFile to fd 1 ...
I1202 20:03:20.480350  451287 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 20:03:20.480361  451287 out.go:374] Setting ErrFile to fd 2...
I1202 20:03:20.480365  451287 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 20:03:20.480630  451287 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-407427/.minikube/bin
I1202 20:03:20.481380  451287 config.go:182] Loaded profile config "functional-536475": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1202 20:03:20.481499  451287 config.go:182] Loaded profile config "functional-536475": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1202 20:03:20.482020  451287 cli_runner.go:164] Run: docker container inspect functional-536475 --format={{.State.Status}}
I1202 20:03:20.501099  451287 ssh_runner.go:195] Run: systemctl --version
I1202 20:03:20.501157  451287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-536475
I1202 20:03:20.522098  451287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/functional-536475/id_rsa Username:docker}
I1202 20:03:20.628283  451287 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)
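
Note: the four listing tests above (short, table, json, yaml) all drive the same subcommand with a different --format value. Collected in one place, with a hypothetical profile name:
  minikube -p my-profile image ls --format short   # repo:tag lines only
  minikube -p my-profile image ls --format table   # boxed table with image ID and size
  minikube -p my-profile image ls --format json    # single JSON array
  minikube -p my-profile image ls --format yaml    # list of mappings, one per image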

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (3.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-536475 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-536475 ssh pgrep buildkitd: exit status 1 (313.265976ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-536475 image build -t localhost/my-image:functional-536475 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-536475 image build -t localhost/my-image:functional-536475 testdata/build --alsologtostderr: (3.093686497s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-536475 image build -t localhost/my-image:functional-536475 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 6a73d3c29e8
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-536475
--> d87ba05eedd
Successfully tagged localhost/my-image:functional-536475
d87ba05eedd9685da19469bf1a4c9191c799c1a2b58ab79ba3874b11d2a48e58
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-536475 image build -t localhost/my-image:functional-536475 testdata/build --alsologtostderr:
I1202 20:03:21.056785  451490 out.go:360] Setting OutFile to fd 1 ...
I1202 20:03:21.057400  451490 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 20:03:21.057449  451490 out.go:374] Setting ErrFile to fd 2...
I1202 20:03:21.057457  451490 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 20:03:21.057925  451490 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-407427/.minikube/bin
I1202 20:03:21.059026  451490 config.go:182] Loaded profile config "functional-536475": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1202 20:03:21.059790  451490 config.go:182] Loaded profile config "functional-536475": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1202 20:03:21.060295  451490 cli_runner.go:164] Run: docker container inspect functional-536475 --format={{.State.Status}}
I1202 20:03:21.080229  451490 ssh_runner.go:195] Run: systemctl --version
I1202 20:03:21.080296  451490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-536475
I1202 20:03:21.099008  451490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/functional-536475/id_rsa Username:docker}
I1202 20:03:21.200323  451490 build_images.go:162] Building image from path: /tmp/build.2862544573.tar
I1202 20:03:21.200387  451490 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1202 20:03:21.209369  451490 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2862544573.tar
I1202 20:03:21.213802  451490 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2862544573.tar: stat -c "%s %y" /var/lib/minikube/build/build.2862544573.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2862544573.tar': No such file or directory
I1202 20:03:21.213836  451490 ssh_runner.go:362] scp /tmp/build.2862544573.tar --> /var/lib/minikube/build/build.2862544573.tar (3072 bytes)
I1202 20:03:21.234383  451490 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2862544573
I1202 20:03:21.243143  451490 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2862544573 -xf /var/lib/minikube/build/build.2862544573.tar
I1202 20:03:21.251940  451490 crio.go:315] Building image: /var/lib/minikube/build/build.2862544573
I1202 20:03:21.252040  451490 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-536475 /var/lib/minikube/build/build.2862544573 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1202 20:03:24.055032  451490 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-536475 /var/lib/minikube/build/build.2862544573 --cgroup-manager=cgroupfs: (2.802958632s)
I1202 20:03:24.055123  451490 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2862544573
I1202 20:03:24.065933  451490 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2862544573.tar
I1202 20:03:24.075701  451490 build_images.go:218] Built localhost/my-image:functional-536475 from /tmp/build.2862544573.tar
I1202 20:03:24.075747  451490 build_images.go:134] succeeded building to: functional-536475
I1202 20:03:24.075753  451490 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-536475 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.70s)
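
Note: the STEP lines above imply a three-line Dockerfile in testdata/build; the reconstruction below is inferred from the build output, not copied from the repository. On a crio profile the build is delegated to podman on the node, as the stderr shows. Paths and profile name here are placeholders:
  # inferred build context (Dockerfile):
  #   FROM gcr.io/k8s-minikube/busybox
  #   RUN true
  #   ADD content.txt /
  minikube -p my-profile image build -t localhost/my-image:my-profile ./build-context --alsologtostderr
  minikube -p my-profile image ls   # the new localhost/my-image tag should appear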

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (1.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.762715542s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-536475
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.79s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-536475 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.17s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-536475 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-536475 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.16s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-536475 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-536475 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-536475 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-536475 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 445336: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.56s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-536475 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (14.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-536475 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [c304ad94-b71c-4589-bc0b-36a5cc8eed43] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [c304ad94-b71c-4589-bc0b-36a5cc8eed43] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 14.005235513s
I1202 20:03:04.598139  411032 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (14.32s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-536475 image rm kicbase/echo-server:functional-536475 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-536475 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.56s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-536475 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.103.125.9 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-536475 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
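The tunnel tests above follow the usual LoadBalancer workflow: keep a tunnel running, read the ingress IP assigned to the Service, then probe it. A minimal sketch, assuming the functional-536475 profile and the nginx-svc Service from testdata/testsvc.yaml:

  # keep a tunnel running so LoadBalancer services receive an external IP
  out/minikube-linux-amd64 -p functional-536475 tunnel --alsologtostderr &
  # read the assigned ingress IP and probe it (the IP value is environment-specific)
  kubectl --context functional-536475 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
  curl http://10.103.125.9/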

                                                
                                    
TestFunctional/parallel/Version/short (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-536475 version --short
--- PASS: TestFunctional/parallel/Version/short (0.09s)

                                                
                                    
TestFunctional/parallel/Version/components (0.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-536475 version -o=json --components
2025/12/02 20:03:27 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/Version/components (0.61s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "359.447496ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "66.887791ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "354.830503ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "64.601049ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (7.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-536475 /tmp/TestFunctionalparallelMountCmdany-port3683820948/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1764705787246922321" to /tmp/TestFunctionalparallelMountCmdany-port3683820948/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1764705787246922321" to /tmp/TestFunctionalparallelMountCmdany-port3683820948/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1764705787246922321" to /tmp/TestFunctionalparallelMountCmdany-port3683820948/001/test-1764705787246922321
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-536475 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-536475 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (295.309464ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1202 20:03:07.542551  411032 retry.go:31] will retry after 409.940441ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-536475 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-536475 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec  2 20:03 created-by-test
-rw-r--r-- 1 docker docker 24 Dec  2 20:03 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec  2 20:03 test-1764705787246922321
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-536475 ssh cat /mount-9p/test-1764705787246922321
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-536475 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [04d2da08-52fe-4838-82f6-c5aca804673d] Pending
helpers_test.go:352: "busybox-mount" [04d2da08-52fe-4838-82f6-c5aca804673d] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [04d2da08-52fe-4838-82f6-c5aca804673d] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [04d2da08-52fe-4838-82f6-c5aca804673d] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003108378s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-536475 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-536475 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-536475 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-536475 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-536475 /tmp/TestFunctionalparallelMountCmdany-port3683820948/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.74s)
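The any-port mount check above amounts to exposing a host directory over 9p and confirming it from inside the node. A rough equivalent, assuming a hypothetical host directory /tmp/demo-mount in place of the per-test temp dir:

  # expose the host directory inside the node over 9p (runs until interrupted)
  out/minikube-linux-amd64 mount -p functional-536475 /tmp/demo-mount:/mount-9p --alsologtostderr -v=1 &
  # verify the 9p mount and list its contents from the guest
  out/minikube-linux-amd64 -p functional-536475 ssh "findmnt -T /mount-9p | grep 9p"
  out/minikube-linux-amd64 -p functional-536475 ssh -- ls -la /mount-9p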

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-536475 /tmp/TestFunctionalparallelMountCmdspecific-port2225979478/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-536475 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-536475 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (293.325108ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1202 20:03:15.279407  411032 retry.go:31] will retry after 661.640218ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-536475 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-536475 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-536475 /tmp/TestFunctionalparallelMountCmdspecific-port2225979478/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-536475 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-536475 ssh "sudo umount -f /mount-9p": exit status 1 (285.276648ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-536475 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-536475 /tmp/TestFunctionalparallelMountCmdspecific-port2225979478/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.03s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (2.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-536475 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2340587714/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-536475 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2340587714/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-536475 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2340587714/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-536475 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-536475 ssh "findmnt -T" /mount1: exit status 1 (371.81401ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1202 20:03:17.386675  411032 retry.go:31] will retry after 733.560387ms: exit status 1
E1202 20:03:18.037885  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/addons-893295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-536475 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-536475 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-536475 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-536475 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-536475 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2340587714/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-536475 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2340587714/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-536475 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2340587714/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.03s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (1.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-536475 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-536475 service list: (1.726650518s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.73s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (1.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-536475 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-536475 service list -o json: (1.732652157s)
functional_test.go:1504: Took "1.732778032s" to run "out/minikube-linux-amd64 -p functional-536475 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.73s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-536475
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-536475
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-536475
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21997-407427/.minikube/files/etc/test/nested/copy/411032/hosts
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (46.94s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-136749 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-136749 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (46.940065757s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (46.94s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (6.53s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart
I1202 20:13:58.043036  411032 config.go:182] Loaded profile config "functional-136749": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-136749 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-136749 --alsologtostderr -v=8: (6.531609731s)
functional_test.go:678: soft start took 6.53241607s for "functional-136749" cluster.
I1202 20:14:04.575475  411032 config.go:182] Loaded profile config "functional-136749": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (6.53s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.05s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (0.08s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-136749 get po -A
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (0.08s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (2.96s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-136749 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-136749 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-136749 cache add registry.k8s.io/pause:3.3: (1.023378808s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-136749 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (2.96s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (1.84s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-136749 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialCach3603886046/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-136749 cache add minikube-local-cache-test:functional-136749
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-136749 cache add minikube-local-cache-test:functional-136749: (1.517532581s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-136749 cache delete minikube-local-cache-test:functional-136749
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-136749
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (1.84s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.07s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-136749 ssh sudo crictl images
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.72s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-136749 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-136749 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-136749 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (308.035804ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-136749 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-136749 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.72s)
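The cache_reload steps above can be replayed by removing a cached image from the node's runtime and asking minikube to push the cache back. The same sequence against the functional-136749 profile:

  # remove the image from the node, confirm it is gone, then restore it from the cache
  out/minikube-linux-amd64 -p functional-136749 ssh sudo crictl rmi registry.k8s.io/pause:latest
  out/minikube-linux-amd64 -p functional-136749 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # expected to fail with "no such image"
  out/minikube-linux-amd64 -p functional-136749 cache reload
  out/minikube-linux-amd64 -p functional-136749 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # should now succeed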

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.15s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.15s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (0.13s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-136749 kubectl -- --context functional-136749 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (0.13s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-136749 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (42.92s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-136749 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-136749 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (42.915098939s)
functional_test.go:776: restart took 42.91523295s for "functional-136749" cluster.
I1202 20:14:54.988236  411032 config.go:182] Loaded profile config "functional-136749": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (42.92s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-136749 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (0.07s)
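The ComponentHealth result above is derived from the control-plane pod phases and Ready conditions; a quick manual spot check, assuming the functional-136749 kubectl context:

  # list control-plane pods and their readiness
  kubectl --context functional-136749 get po -l tier=control-plane -n kube-system
  # or print name and phase per pod
  kubectl --context functional-136749 get po -l tier=control-plane -n kube-system -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\n"}{end}'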

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (1.29s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-136749 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-136749 logs: (1.283351544s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (1.29s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.31s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-136749 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs1180896405/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-136749 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs1180896405/001/logs.txt: (1.308976413s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.31s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (3.95s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-136749 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-136749
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-136749: exit status 115 (372.225552ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:32020 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-136749 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (3.95s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.48s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-136749 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-136749 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-136749 config get cpus: exit status 14 (88.438708ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-136749 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-136749 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-136749 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-136749 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-136749 config get cpus: exit status 14 (88.647278ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.48s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (11.69s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-136749 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-136749 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 473731: os: process already finished
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (11.69s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.43s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-136749 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-136749 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 23 (191.044149ms)

                                                
                                                
-- stdout --
	* [functional-136749] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21997-407427/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-407427/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 20:15:28.955816  473211 out.go:360] Setting OutFile to fd 1 ...
	I1202 20:15:28.956034  473211 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:15:28.956043  473211 out.go:374] Setting ErrFile to fd 2...
	I1202 20:15:28.956047  473211 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:15:28.956338  473211 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-407427/.minikube/bin
	I1202 20:15:28.956778  473211 out.go:368] Setting JSON to false
	I1202 20:15:28.957818  473211 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":7073,"bootTime":1764699456,"procs":240,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 20:15:28.957882  473211 start.go:143] virtualization: kvm guest
	I1202 20:15:28.960318  473211 out.go:179] * [functional-136749] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1202 20:15:28.961567  473211 out.go:179]   - MINIKUBE_LOCATION=21997
	I1202 20:15:28.961607  473211 notify.go:221] Checking for updates...
	I1202 20:15:28.963794  473211 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 20:15:28.964898  473211 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-407427/kubeconfig
	I1202 20:15:28.966030  473211 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-407427/.minikube
	I1202 20:15:28.967831  473211 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1202 20:15:28.969038  473211 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 20:15:28.970704  473211 config.go:182] Loaded profile config "functional-136749": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 20:15:28.971251  473211 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 20:15:28.999889  473211 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1202 20:15:29.000018  473211 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 20:15:29.065474  473211 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-02 20:15:29.052380154 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 20:15:29.065655  473211 docker.go:319] overlay module found
	I1202 20:15:29.067437  473211 out.go:179] * Using the docker driver based on existing profile
	I1202 20:15:29.068766  473211 start.go:309] selected driver: docker
	I1202 20:15:29.068788  473211 start.go:927] validating driver "docker" against &{Name:functional-136749 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-136749 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountO
ptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 20:15:29.068916  473211 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 20:15:29.070658  473211 out.go:203] 
	W1202 20:15:29.072329  473211 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1202 20:15:29.074668  473211 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-136749 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.43s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.18s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-136749 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-136749 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 23 (176.107208ms)

                                                
                                                
-- stdout --
	* [functional-136749] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21997-407427/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-407427/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 20:15:29.380198  473450 out.go:360] Setting OutFile to fd 1 ...
	I1202 20:15:29.380519  473450 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:15:29.380530  473450 out.go:374] Setting ErrFile to fd 2...
	I1202 20:15:29.380535  473450 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:15:29.380899  473450 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-407427/.minikube/bin
	I1202 20:15:29.381414  473450 out.go:368] Setting JSON to false
	I1202 20:15:29.382444  473450 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":7073,"bootTime":1764699456,"procs":239,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 20:15:29.382514  473450 start.go:143] virtualization: kvm guest
	I1202 20:15:29.384888  473450 out.go:179] * [functional-136749] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1202 20:15:29.386905  473450 out.go:179]   - MINIKUBE_LOCATION=21997
	I1202 20:15:29.386917  473450 notify.go:221] Checking for updates...
	I1202 20:15:29.389461  473450 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 20:15:29.391013  473450 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-407427/kubeconfig
	I1202 20:15:29.392707  473450 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-407427/.minikube
	I1202 20:15:29.394027  473450 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1202 20:15:29.395218  473450 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 20:15:29.396773  473450 config.go:182] Loaded profile config "functional-136749": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 20:15:29.397381  473450 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 20:15:29.424926  473450 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1202 20:15:29.425119  473450 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 20:15:29.480277  473450 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-02 20:15:29.470524655 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 20:15:29.480396  473450 docker.go:319] overlay module found
	I1202 20:15:29.482324  473450 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1202 20:15:29.483865  473450 start.go:309] selected driver: docker
	I1202 20:15:29.483888  473450 start.go:927] validating driver "docker" against &{Name:functional-136749 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-136749 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountO
ptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 20:15:29.483992  473450 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 20:15:29.486054  473450 out.go:203] 
	W1202 20:15:29.487629  473450 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1202 20:15:29.489028  473450 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.18s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (1.02s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-136749 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-136749 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-136749 status -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (1.02s)
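(Editor's note) The StatusCmd checks above drive `minikube status` three ways: default output, a custom Go template, and JSON. Below is a minimal Go sketch of the templated variant; the binary path and profile name are copied from this report and are assumptions about the local environment, not fixed values.

// status_format_sketch.go: re-runs the templated status check shown above.
// Binary path and profile name are assumptions copied from this report.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// The label "kublet" is spelled exactly as in the test invocation above.
	format := "host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}"
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-136749",
		"status", "-f", format).Output()
	if err != nil {
		log.Fatalf("minikube status failed: %v", err)
	}
	for _, field := range strings.Split(strings.TrimSpace(string(out)), ",") {
		fmt.Println(field) // e.g. "host:Running"
	}
}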

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.16s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-136749 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-136749 addons list -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.16s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (24.45s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [5f1e40bc-3672-4470-ae7b-130bd041c7a6] Running
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003709285s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-136749 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-136749 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-136749 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-136749 apply -f testdata/storage-provisioner/pod.yaml
I1202 20:15:09.804229  411032 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [4d5d2e85-fb88-44e9-8034-62307265b8d3] Pending
helpers_test.go:352: "sp-pod" [4d5d2e85-fb88-44e9-8034-62307265b8d3] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [4d5d2e85-fb88-44e9-8034-62307265b8d3] Running
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.003802361s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-136749 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-136749 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-136749 apply -f testdata/storage-provisioner/pod.yaml
I1202 20:15:20.784158  411032 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [debf5a13-ecbd-46ec-b8a3-9037b2636338] Pending
helpers_test.go:352: "sp-pod" [debf5a13-ecbd-46ec-b8a3-9037b2636338] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [debf5a13-ecbd-46ec-b8a3-9037b2636338] Running
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.004350892s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-136749 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (24.45s)
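(Editor's note) The PersistentVolumeClaim test above applies a claim and a consumer pod, writes a marker file, recreates the pod, and checks that the file survives on the claim. A rough Go sketch of that flow driven with plain kubectl follows; it assumes kubectl is on PATH and reuses the context, manifest paths, and pod name from this report.

// pvc_persistence_sketch.go: the write/recreate/read cycle from the test above,
// expressed as plain kubectl calls. Context, paths and pod name are taken from
// this report; kubectl on PATH is assumed.
package main

import (
	"log"
	"os/exec"
)

func kubectl(args ...string) {
	full := append([]string{"--context", "functional-136749"}, args...)
	if out, err := exec.Command("kubectl", full...).CombinedOutput(); err != nil {
		log.Fatalf("kubectl %v: %v\n%s", args, err, out)
	}
}

func main() {
	kubectl("apply", "-f", "testdata/storage-provisioner/pvc.yaml")
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("wait", "--for=condition=Ready", "pod/sp-pod", "--timeout=6m")
	kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
	// Recreate the pod; the claim (and the marker file) should survive.
	kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("wait", "--for=condition=Ready", "pod/sp-pod", "--timeout=6m")
	kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount")
}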

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.63s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-136749 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-136749 ssh "cat /etc/hostname"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.63s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (1.97s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-136749 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-136749 ssh -n functional-136749 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-136749 cp functional-136749:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp1592355970/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-136749 ssh -n functional-136749 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-136749 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-136749 ssh -n functional-136749 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (1.97s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (15.69s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-136749 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-844cf969f6-gdjq7" [fd1a552a-b2df-476a-8de7-052bb06dec6a] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-844cf969f6-gdjq7" [fd1a552a-b2df-476a-8de7-052bb06dec6a] Running
2025/12/02 20:15:40 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:1804: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL: app=mysql healthy within 14.003743281s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-136749 exec mysql-844cf969f6-gdjq7 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-136749 exec mysql-844cf969f6-gdjq7 -- mysql -ppassword -e "show databases;": exit status 1 (94.106109ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1202 20:15:43.153771  411032 retry.go:31] will retry after 1.322808901s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-136749 exec mysql-844cf969f6-gdjq7 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (15.69s)
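(Editor's note) The first "show databases" above fails with ERROR 2002 because mysqld inside the pod is still starting, and the harness retries about a second later and succeeds. A small Go sketch of that retry-until-ready loop is below; the context, pod name, and password are copied from this report, and kubectl on PATH is assumed.

// mysql_retry_sketch.go: retry "show databases" until mysqld accepts connections,
// mirroring the retry visible in the log above. Pod name, context and password
// are taken from this report; kubectl on PATH is assumed.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for {
		out, err := exec.Command("kubectl", "--context", "functional-136749",
			"exec", "mysql-844cf969f6-gdjq7", "--",
			"mysql", "-ppassword", "-e", "show databases;").CombinedOutput()
		if err == nil {
			fmt.Printf("%s", out)
			return
		}
		if time.Now().After(deadline) {
			log.Fatalf("mysql never became ready: %v\n%s", err, out)
		}
		// ERROR 2002 only means the server socket is not up yet; back off and retry.
		time.Sleep(2 * time.Second)
	}
}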

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.33s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/411032/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-136749 ssh "sudo cat /etc/test/nested/copy/411032/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.33s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.96s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/411032.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-136749 ssh "sudo cat /etc/ssl/certs/411032.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/411032.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-136749 ssh "sudo cat /usr/share/ca-certificates/411032.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-136749 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/4110322.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-136749 ssh "sudo cat /etc/ssl/certs/4110322.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/4110322.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-136749 ssh "sudo cat /usr/share/ca-certificates/4110322.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-136749 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.96s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (0.08s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-136749 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (0.08s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.66s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-136749 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-136749 ssh "sudo systemctl is-active docker": exit status 1 (330.555643ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-136749 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-136749 ssh "sudo systemctl is-active containerd": exit status 1 (326.119154ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.66s)
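(Editor's note) NonActiveRuntimeDisabled confirms that with cri-o selected, the other runtimes are stopped: `systemctl is-active` prints the state and exits non-zero for an inactive unit (the "Process exited with status 3" above), so the harness records a non-zero exit even though "inactive" is the expected answer. A short Go sketch of the same check; binary path and profile are assumptions about the local setup.

// runtime_inactive_sketch.go: verify docker and containerd are inactive inside
// the node. "systemctl is-active" exits non-zero for inactive units, so only
// the printed state is compared. Binary path and profile are assumptions.
package main

import (
	"log"
	"os/exec"
	"strings"
)

func main() {
	for _, unit := range []string{"docker", "containerd"} {
		// The non-zero exit is expected for an inactive unit, so the error is ignored
		// and only stdout is inspected.
		out, _ := exec.Command("out/minikube-linux-amd64", "-p", "functional-136749",
			"ssh", "sudo systemctl is-active "+unit).Output()
		state := strings.TrimSpace(string(out))
		if state != "inactive" {
			log.Fatalf("%s should be inactive, got %q", unit, state)
		}
		log.Printf("%s: %s", unit, state)
	}
}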

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.39s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.39s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.23s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-136749 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-136749 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0-beta.0
registry.k8s.io/kube-proxy:v1.35.0-beta.0
registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
registry.k8s.io/kube-apiserver:v1.35.0-beta.0
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.13.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-136749 image ls --format short --alsologtostderr:
I1202 20:15:42.410100  474755 out.go:360] Setting OutFile to fd 1 ...
I1202 20:15:42.410239  474755 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 20:15:42.410252  474755 out.go:374] Setting ErrFile to fd 2...
I1202 20:15:42.410259  474755 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 20:15:42.410493  474755 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-407427/.minikube/bin
I1202 20:15:42.411055  474755 config.go:182] Loaded profile config "functional-136749": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1202 20:15:42.411164  474755 config.go:182] Loaded profile config "functional-136749": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1202 20:15:42.411618  474755 cli_runner.go:164] Run: docker container inspect functional-136749 --format={{.State.Status}}
I1202 20:15:42.429352  474755 ssh_runner.go:195] Run: systemctl --version
I1202 20:15:42.429402  474755 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-136749
I1202 20:15:42.447999  474755 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/functional-136749/id_rsa Username:docker}
I1202 20:15:42.547128  474755 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.23s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.26s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-136749 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-136749 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/kube-apiserver          │ v1.35.0-beta.0     │ aa9d02839d8de │ 90.8MB │
│ registry.k8s.io/kube-proxy              │ v1.35.0-beta.0     │ 8a4ded35a3eb1 │ 72MB   │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 740kB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ docker.io/library/mysql                 │ 5.7                │ 5107333e08a87 │ 520MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ registry.k8s.io/coredns/coredns         │ v1.13.1            │ aa5e3ebc0dfed │ 79.2MB │
│ registry.k8s.io/etcd                    │ 3.6.5-0            │ a3e246e9556e9 │ 63.6MB │
│ registry.k8s.io/kube-controller-manager │ v1.35.0-beta.0     │ 45f3cc72d235f │ 76.9MB │
│ registry.k8s.io/kube-scheduler          │ v1.35.0-beta.0     │ 7bb6219ddab95 │ 52.7MB │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ docker.io/library/nginx                 │ latest             │ 60adc2e137e75 │ 155MB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ docker.io/library/nginx                 │ alpine             │ d4918ca78576a │ 54.3MB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-136749 image ls --format table --alsologtostderr:
I1202 20:15:44.866473  475165 out.go:360] Setting OutFile to fd 1 ...
I1202 20:15:44.866737  475165 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 20:15:44.866748  475165 out.go:374] Setting ErrFile to fd 2...
I1202 20:15:44.866755  475165 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 20:15:44.867016  475165 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-407427/.minikube/bin
I1202 20:15:44.867600  475165 config.go:182] Loaded profile config "functional-136749": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1202 20:15:44.867733  475165 config.go:182] Loaded profile config "functional-136749": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1202 20:15:44.868267  475165 cli_runner.go:164] Run: docker container inspect functional-136749 --format={{.State.Status}}
I1202 20:15:44.887267  475165 ssh_runner.go:195] Run: systemctl --version
I1202 20:15:44.887315  475165 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-136749
I1202 20:15:44.908948  475165 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/functional-136749/id_rsa Username:docker}
I1202 20:15:45.010675  475165 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.26s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.23s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-136749 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-136749 image ls --format json --alsologtostderr:
[{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9","repoDigests":["docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7","docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14"],"repoTags":["docker.io/library/nginx:alpine"],"size":"54252718"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/bu
sybox:1.28.4-glibc"],"size":"4631262"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31468661"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139","repoDigests":["registry.k8s.io/coredns/coredns@sha256:dfca5e5f4caae19c3ac20d841ab02fe19647ef0dd97c41424007cceb417af7db"],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"79190589"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTag
s":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810","repoDigests":["registry.k8s.io/kube-proxy@sha256:0ed737a63ad50cf0d7049b0bd88755be8d5bc9fb5e39efdece79639b998532f6"],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0-beta.0"],"size":"71976228"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"siz
e":"43824855"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"60adc2e137e757418d4d771822fa3b3f5d3b4ad58ef2385d200c9ee78375b6d5","repoDigests":["docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42","docker.io/library/nginx@sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541"],"repoTags":["docker.io/library/nginx:latest"],"size":"155491845"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579e
f2b1","repoDigests":["registry.k8s.io/etcd@sha256:09c404d47c88be54eaaf0af6edaecdc1a417bcf04522ffeaf62c4dc0ed5a6d10"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"63582165"},{"id":"45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:5e3bd70d468022881b995e23abf02a2d39ee87ebacd7018f6c478d9e01870b8b"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"],"size":"76869776"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:a8ad62a46c568df922febd0986d02f88bfe5e1a8f5e8dd0bd02a0cafffba019b"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"739536"},{"id":"aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b","repoDigests":["registry.k8s.io/kube-apiserver@sha256:dd50de52ebf30a673c65da77c8b4af5cbc6be3c475a2d8165796a7a7bdd0b9d5"],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0-beta.0"],"size":"90816810"},{"id":"7bb6219ddab95bdabbef83f05
1bee4fdd14b6f791aaa3121080cb2c58ada2e46","repoDigests":["registry.k8s.io/kube-scheduler@sha256:f852fad6b028092c481b57e7fcd16936a8aec43c2e4dccf5a0600946a449c2a3"],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0-beta.0"],"size":"52744336"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-136749 image ls --format json --alsologtostderr:
I1202 20:15:44.624924  475113 out.go:360] Setting OutFile to fd 1 ...
I1202 20:15:44.625206  475113 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 20:15:44.625217  475113 out.go:374] Setting ErrFile to fd 2...
I1202 20:15:44.625221  475113 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 20:15:44.625419  475113 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-407427/.minikube/bin
I1202 20:15:44.625981  475113 config.go:182] Loaded profile config "functional-136749": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1202 20:15:44.626094  475113 config.go:182] Loaded profile config "functional-136749": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1202 20:15:44.626501  475113 cli_runner.go:164] Run: docker container inspect functional-136749 --format={{.State.Status}}
I1202 20:15:44.646206  475113 ssh_runner.go:195] Run: systemctl --version
I1202 20:15:44.646260  475113 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-136749
I1202 20:15:44.665283  475113 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/functional-136749/id_rsa Username:docker}
I1202 20:15:44.764898  475113 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.23s)
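(Editor's note) The JSON listing above is a top-level array of image records with id, repoDigests, repoTags, and size fields. A small Go sketch that decodes that shape from `image ls --format json` follows; the field names are taken from the output above, while the binary path and profile name are assumptions copied from this report.

// image_ls_json_sketch.go: decode the JSON produced by "image ls --format json"
// into the record shape visible above. Binary path and profile name are
// assumptions copied from this report.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-136749",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		log.Fatalf("image ls failed: %v", err)
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		log.Fatalf("decode: %v", err)
	}
	for _, img := range images {
		tag := "<none>"
		if len(img.RepoTags) > 0 {
			tag = img.RepoTags[0]
		}
		fmt.Printf("%-60s %s bytes\n", tag, img.Size)
	}
}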

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.24s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-136749 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-136749 image ls --format yaml --alsologtostderr:
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 60adc2e137e757418d4d771822fa3b3f5d3b4ad58ef2385d200c9ee78375b6d5
repoDigests:
- docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42
- docker.io/library/nginx@sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541
repoTags:
- docker.io/library/nginx:latest
size: "155491845"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:dfca5e5f4caae19c3ac20d841ab02fe19647ef0dd97c41424007cceb417af7db
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "79190589"
- id: 7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:f852fad6b028092c481b57e7fcd16936a8aec43c2e4dccf5a0600946a449c2a3
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0-beta.0
size: "52744336"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9
repoDigests:
- docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7
- docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14
repoTags:
- docker.io/library/nginx:alpine
size: "54252718"
- id: a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests:
- registry.k8s.io/etcd@sha256:09c404d47c88be54eaaf0af6edaecdc1a417bcf04522ffeaf62c4dc0ed5a6d10
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "63582165"
- id: aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:dd50de52ebf30a673c65da77c8b4af5cbc6be3c475a2d8165796a7a7bdd0b9d5
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0-beta.0
size: "90816810"
- id: 45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:5e3bd70d468022881b995e23abf02a2d39ee87ebacd7018f6c478d9e01870b8b
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
size: "76869776"
- id: 8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810
repoDigests:
- registry.k8s.io/kube-proxy@sha256:0ed737a63ad50cf0d7049b0bd88755be8d5bc9fb5e39efdece79639b998532f6
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0-beta.0
size: "71976228"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:a8ad62a46c568df922febd0986d02f88bfe5e1a8f5e8dd0bd02a0cafffba019b
repoTags:
- registry.k8s.io/pause:3.10.1
size: "739536"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31468661"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-136749 image ls --format yaml --alsologtostderr:
I1202 20:15:42.644224  474808 out.go:360] Setting OutFile to fd 1 ...
I1202 20:15:42.644480  474808 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 20:15:42.644489  474808 out.go:374] Setting ErrFile to fd 2...
I1202 20:15:42.644493  474808 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 20:15:42.644680  474808 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-407427/.minikube/bin
I1202 20:15:42.645251  474808 config.go:182] Loaded profile config "functional-136749": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1202 20:15:42.645345  474808 config.go:182] Loaded profile config "functional-136749": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1202 20:15:42.645796  474808 cli_runner.go:164] Run: docker container inspect functional-136749 --format={{.State.Status}}
I1202 20:15:42.665149  474808 ssh_runner.go:195] Run: systemctl --version
I1202 20:15:42.665214  474808 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-136749
I1202 20:15:42.684248  474808 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/functional-136749/id_rsa Username:docker}
I1202 20:15:42.783204  474808 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.24s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (3.83s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-136749 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-136749 ssh pgrep buildkitd: exit status 1 (276.680356ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-136749 image build -t localhost/my-image:functional-136749 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-136749 image build -t localhost/my-image:functional-136749 testdata/build --alsologtostderr: (3.314881257s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-136749 image build -t localhost/my-image:functional-136749 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 16c369aeaa8
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-136749
--> b0a4a9bd865
Successfully tagged localhost/my-image:functional-136749
b0a4a9bd8656e76c1d7810e69a59ed9093614c2e757b78ee0c4cabbc5ae05c34
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-136749 image build -t localhost/my-image:functional-136749 testdata/build --alsologtostderr:
I1202 20:15:43.160743  474981 out.go:360] Setting OutFile to fd 1 ...
I1202 20:15:43.160840  474981 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 20:15:43.160848  474981 out.go:374] Setting ErrFile to fd 2...
I1202 20:15:43.160852  474981 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 20:15:43.161043  474981 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-407427/.minikube/bin
I1202 20:15:43.161646  474981 config.go:182] Loaded profile config "functional-136749": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1202 20:15:43.162367  474981 config.go:182] Loaded profile config "functional-136749": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1202 20:15:43.162856  474981 cli_runner.go:164] Run: docker container inspect functional-136749 --format={{.State.Status}}
I1202 20:15:43.182856  474981 ssh_runner.go:195] Run: systemctl --version
I1202 20:15:43.182920  474981 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-136749
I1202 20:15:43.201013  474981 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/functional-136749/id_rsa Username:docker}
I1202 20:15:43.301338  474981 build_images.go:162] Building image from path: /tmp/build.1283620926.tar
I1202 20:15:43.301438  474981 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1202 20:15:43.310607  474981 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1283620926.tar
I1202 20:15:43.314697  474981 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1283620926.tar: stat -c "%s %y" /var/lib/minikube/build/build.1283620926.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1283620926.tar': No such file or directory
I1202 20:15:43.314735  474981 ssh_runner.go:362] scp /tmp/build.1283620926.tar --> /var/lib/minikube/build/build.1283620926.tar (3072 bytes)
I1202 20:15:43.333919  474981 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1283620926
I1202 20:15:43.342450  474981 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1283620926 -xf /var/lib/minikube/build/build.1283620926.tar
I1202 20:15:43.351170  474981 crio.go:315] Building image: /var/lib/minikube/build/build.1283620926
I1202 20:15:43.351251  474981 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-136749 /var/lib/minikube/build/build.1283620926 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1202 20:15:46.385187  474981 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-136749 /var/lib/minikube/build/build.1283620926 --cgroup-manager=cgroupfs: (3.03389846s)
I1202 20:15:46.385272  474981 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1283620926
I1202 20:15:46.394568  474981 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1283620926.tar
I1202 20:15:46.403235  474981 build_images.go:218] Built localhost/my-image:functional-136749 from /tmp/build.1283620926.tar
I1202 20:15:46.403278  474981 build_images.go:134] succeeded building to: functional-136749
I1202 20:15:46.403285  474981 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-136749 image ls
E1202 20:16:56.097299  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/addons-893295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:17:47.562812  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/functional-536475/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:17:47.569339  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/functional-536475/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:17:47.580787  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/functional-536475/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:17:47.602301  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/functional-536475/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:17:47.643861  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/functional-536475/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:17:47.725449  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/functional-536475/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:17:47.887158  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/functional-536475/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:17:48.208828  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/functional-536475/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:17:48.850612  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/functional-536475/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:17:50.132549  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/functional-536475/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:17:52.694511  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/functional-536475/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:17:57.816560  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/functional-536475/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:18:08.058533  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/functional-536475/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:18:19.163423  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/addons-893295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:18:28.540266  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/functional-536475/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:19:09.502242  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/functional-536475/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:20:31.424122  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/functional-536475/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:21:56.096800  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/addons-893295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:22:47.562315  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/functional-536475/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:23:15.266319  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/functional-536475/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (3.83s)
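The build log above shows how minikube image build behaves on the crio runtime: the build context is shipped to the node as a tar and built there with podman build. A minimal sketch of the same flow from the host, assuming the functional-136749 profile and a local ./build-ctx directory containing a Dockerfile (the directory name is illustrative):

  # build into the node's container storage, then confirm the image is listed
  out/minikube-linux-amd64 -p functional-136749 image build -t localhost/my-image:functional-136749 ./build-ctx
  out/minikube-linux-amd64 -p functional-136749 image ls | grep my-image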

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.85s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-136749
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.85s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.07s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-136749 version --short
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.07s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.57s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-136749 version -o=json --components
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.57s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.21s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-136749 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.21s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-136749 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.16s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-136749 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.16s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.56s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-136749 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-136749 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-136749 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-136749 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 468748: os: process already finished
helpers_test.go:525: unable to kill pid 468442: os: process already finished
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.56s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-136749 tunnel --alsologtostderr]
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (10.2s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-136749 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [44212aac-7bac-458e-a645-8c03f656ff7c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [44212aac-7bac-458e-a645-8c03f656ff7c] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.003406704s
I1202 20:15:14.956567  411032 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (10.20s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.67s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-136749 image rm kicbase/echo-server:functional-136749 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-136749 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.67s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-136749 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.100.63.52 is working!
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-136749 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
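The tunnel tests above follow the standard workflow: keep minikube tunnel running, create a LoadBalancer Service, and read the ingress IP it gets assigned. A minimal sketch using the same commands as the suite (testsvc.yaml is the manifest from the test's testdata):

  # run the tunnel in the background, expose nginx-svc, then resolve its LoadBalancer ingress IP
  out/minikube-linux-amd64 -p functional-136749 tunnel --alsologtostderr &
  kubectl --context functional-136749 apply -f testdata/testsvc.yaml
  kubectl --context functional-136749 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'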

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.45s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.45s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.43s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "362.039495ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "65.998978ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.43s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.43s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "353.668268ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "73.711502ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.43s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (7.95s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-136749 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3566056901/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1764706517441279377" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3566056901/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1764706517441279377" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3566056901/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1764706517441279377" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3566056901/001/test-1764706517441279377
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-136749 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-136749 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (301.967267ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1202 20:15:17.743576  411032 retry.go:31] will retry after 592.719536ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-136749 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-136749 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec  2 20:15 created-by-test
-rw-r--r-- 1 docker docker 24 Dec  2 20:15 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec  2 20:15 test-1764706517441279377
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-136749 ssh cat /mount-9p/test-1764706517441279377
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-136749 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [361c2f9e-15a6-4c6b-86e1-1a849a98e28f] Pending
helpers_test.go:352: "busybox-mount" [361c2f9e-15a6-4c6b-86e1-1a849a98e28f] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [361c2f9e-15a6-4c6b-86e1-1a849a98e28f] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [361c2f9e-15a6-4c6b-86e1-1a849a98e28f] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.0039227s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-136749 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-136749 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-136749 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-136749 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-136749 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3566056901/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (7.95s)
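The any-port mount test reduces to mounting a host directory into the node over 9p and checking it from the guest. A minimal sketch, assuming a scratch directory /tmp/mount-demo on the host (an illustrative path, not one used by the suite):

  # mount the host directory, verify the 9p mount, then list its contents from inside the node
  out/minikube-linux-amd64 mount -p functional-136749 /tmp/mount-demo:/mount-9p --alsologtostderr -v=1 &
  out/minikube-linux-amd64 -p functional-136749 ssh "findmnt -T /mount-9p | grep 9p"
  out/minikube-linux-amd64 -p functional-136749 ssh -- ls -la /mount-9p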

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.85s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-136749 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2133601519/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-136749 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-136749 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (295.587175ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1202 20:15:25.686974  411032 retry.go:31] will retry after 478.012622ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-136749 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-136749 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-136749 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2133601519/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-136749 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-136749 ssh "sudo umount -f /mount-9p": exit status 1 (290.223874ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-136749 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-136749 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2133601519/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.85s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.65s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-136749 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo267375188/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-136749 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo267375188/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-136749 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo267375188/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-136749 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-136749 ssh "findmnt -T" /mount1: exit status 1 (355.112093ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1202 20:15:27.602105  411032 retry.go:31] will retry after 355.73841ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-136749 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-136749 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-136749 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-136749 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-136749 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo267375188/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-136749 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo267375188/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-136749 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo267375188/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.65s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (1.73s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-136749 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-136749 service list: (1.734565846s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (1.73s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (1.73s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-136749 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-136749 service list -o json: (1.730392394s)
functional_test.go:1504: Took "1.730513707s" to run "out/minikube-linux-amd64 -p functional-136749 service list -o json"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (1.73s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-136749
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-136749
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-136749
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (120.27s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-751582 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1202 20:26:56.097045  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/addons-893295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-751582 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (1m59.483311918s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-751582 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (120.27s)
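The cluster used by the rest of TestMultiControlPlane is created with the --ha flag, which provisions extra control-plane nodes behind a single API endpoint. A minimal sketch of the start and status commands exercised above:

  # start a multi-control-plane cluster on the docker driver with CRI-O, then report per-node status
  out/minikube-linux-amd64 -p ha-751582 start --ha --memory 3072 --wait true --driver=docker --container-runtime=crio
  out/minikube-linux-amd64 -p ha-751582 status --alsologtostderr -v 5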

TestMultiControlPlane/serial/DeployApp (5.5s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-751582 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-751582 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-751582 kubectl -- rollout status deployment/busybox: (3.397807422s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-751582 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-751582 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-751582 kubectl -- exec busybox-7b57f96db7-7kwdl -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-751582 kubectl -- exec busybox-7b57f96db7-7mwsq -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-751582 kubectl -- exec busybox-7b57f96db7-l8hh4 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-751582 kubectl -- exec busybox-7b57f96db7-7kwdl -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-751582 kubectl -- exec busybox-7b57f96db7-7mwsq -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-751582 kubectl -- exec busybox-7b57f96db7-l8hh4 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-751582 kubectl -- exec busybox-7b57f96db7-7kwdl -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-751582 kubectl -- exec busybox-7b57f96db7-7mwsq -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-751582 kubectl -- exec busybox-7b57f96db7-l8hh4 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.50s)

TestMultiControlPlane/serial/PingHostFromPods (1.12s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-751582 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-751582 kubectl -- exec busybox-7b57f96db7-7kwdl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-751582 kubectl -- exec busybox-7b57f96db7-7kwdl -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-751582 kubectl -- exec busybox-7b57f96db7-7mwsq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-751582 kubectl -- exec busybox-7b57f96db7-7mwsq -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-751582 kubectl -- exec busybox-7b57f96db7-l8hh4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-751582 kubectl -- exec busybox-7b57f96db7-l8hh4 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.12s)

TestMultiControlPlane/serial/AddWorkerNode (27.53s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-751582 node add --alsologtostderr -v 5
E1202 20:27:47.562601  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/functional-536475/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-751582 node add --alsologtostderr -v 5: (26.588046138s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-751582 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (27.53s)

TestMultiControlPlane/serial/NodeLabels (0.07s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-751582 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.96s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.96s)

TestMultiControlPlane/serial/CopyFile (18.42s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-751582 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-751582 cp testdata/cp-test.txt ha-751582:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-751582 ssh -n ha-751582 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-751582 cp ha-751582:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2710947443/001/cp-test_ha-751582.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-751582 ssh -n ha-751582 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-751582 cp ha-751582:/home/docker/cp-test.txt ha-751582-m02:/home/docker/cp-test_ha-751582_ha-751582-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-751582 ssh -n ha-751582 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-751582 ssh -n ha-751582-m02 "sudo cat /home/docker/cp-test_ha-751582_ha-751582-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-751582 cp ha-751582:/home/docker/cp-test.txt ha-751582-m03:/home/docker/cp-test_ha-751582_ha-751582-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-751582 ssh -n ha-751582 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-751582 ssh -n ha-751582-m03 "sudo cat /home/docker/cp-test_ha-751582_ha-751582-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-751582 cp ha-751582:/home/docker/cp-test.txt ha-751582-m04:/home/docker/cp-test_ha-751582_ha-751582-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-751582 ssh -n ha-751582 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-751582 ssh -n ha-751582-m04 "sudo cat /home/docker/cp-test_ha-751582_ha-751582-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-751582 cp testdata/cp-test.txt ha-751582-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-751582 ssh -n ha-751582-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-751582 cp ha-751582-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2710947443/001/cp-test_ha-751582-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-751582 ssh -n ha-751582-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-751582 cp ha-751582-m02:/home/docker/cp-test.txt ha-751582:/home/docker/cp-test_ha-751582-m02_ha-751582.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-751582 ssh -n ha-751582-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-751582 ssh -n ha-751582 "sudo cat /home/docker/cp-test_ha-751582-m02_ha-751582.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-751582 cp ha-751582-m02:/home/docker/cp-test.txt ha-751582-m03:/home/docker/cp-test_ha-751582-m02_ha-751582-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-751582 ssh -n ha-751582-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-751582 ssh -n ha-751582-m03 "sudo cat /home/docker/cp-test_ha-751582-m02_ha-751582-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-751582 cp ha-751582-m02:/home/docker/cp-test.txt ha-751582-m04:/home/docker/cp-test_ha-751582-m02_ha-751582-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-751582 ssh -n ha-751582-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-751582 ssh -n ha-751582-m04 "sudo cat /home/docker/cp-test_ha-751582-m02_ha-751582-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-751582 cp testdata/cp-test.txt ha-751582-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-751582 ssh -n ha-751582-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-751582 cp ha-751582-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2710947443/001/cp-test_ha-751582-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-751582 ssh -n ha-751582-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-751582 cp ha-751582-m03:/home/docker/cp-test.txt ha-751582:/home/docker/cp-test_ha-751582-m03_ha-751582.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-751582 ssh -n ha-751582-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-751582 ssh -n ha-751582 "sudo cat /home/docker/cp-test_ha-751582-m03_ha-751582.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-751582 cp ha-751582-m03:/home/docker/cp-test.txt ha-751582-m02:/home/docker/cp-test_ha-751582-m03_ha-751582-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-751582 ssh -n ha-751582-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-751582 ssh -n ha-751582-m02 "sudo cat /home/docker/cp-test_ha-751582-m03_ha-751582-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-751582 cp ha-751582-m03:/home/docker/cp-test.txt ha-751582-m04:/home/docker/cp-test_ha-751582-m03_ha-751582-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-751582 ssh -n ha-751582-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-751582 ssh -n ha-751582-m04 "sudo cat /home/docker/cp-test_ha-751582-m03_ha-751582-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-751582 cp testdata/cp-test.txt ha-751582-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-751582 ssh -n ha-751582-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-751582 cp ha-751582-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2710947443/001/cp-test_ha-751582-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-751582 ssh -n ha-751582-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-751582 cp ha-751582-m04:/home/docker/cp-test.txt ha-751582:/home/docker/cp-test_ha-751582-m04_ha-751582.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-751582 ssh -n ha-751582-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-751582 ssh -n ha-751582 "sudo cat /home/docker/cp-test_ha-751582-m04_ha-751582.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-751582 cp ha-751582-m04:/home/docker/cp-test.txt ha-751582-m02:/home/docker/cp-test_ha-751582-m04_ha-751582-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-751582 ssh -n ha-751582-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-751582 ssh -n ha-751582-m02 "sudo cat /home/docker/cp-test_ha-751582-m04_ha-751582-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-751582 cp ha-751582-m04:/home/docker/cp-test.txt ha-751582-m03:/home/docker/cp-test_ha-751582-m04_ha-751582-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-751582 ssh -n ha-751582-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-751582 ssh -n ha-751582-m03 "sudo cat /home/docker/cp-test_ha-751582-m04_ha-751582-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (18.42s)
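Each pair of lines above is the same two-step check: minikube cp copies the file onto (or between) nodes, and minikube ssh -n <node> reads it back. A minimal sketch for the primary node only:

  # copy a file into the node and read it back over ssh
  out/minikube-linux-amd64 -p ha-751582 cp testdata/cp-test.txt ha-751582:/home/docker/cp-test.txt
  out/minikube-linux-amd64 -p ha-751582 ssh -n ha-751582 "sudo cat /home/docker/cp-test.txt"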

TestMultiControlPlane/serial/StopSecondaryNode (14.43s)
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-751582 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-751582 node stop m02 --alsologtostderr -v 5: (13.693744161s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-751582 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-751582 status --alsologtostderr -v 5: exit status 7 (740.109103ms)

-- stdout --
	ha-751582
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-751582-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-751582-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-751582-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1202 20:28:22.274738  499596 out.go:360] Setting OutFile to fd 1 ...
	I1202 20:28:22.275042  499596 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:28:22.275051  499596 out.go:374] Setting ErrFile to fd 2...
	I1202 20:28:22.275057  499596 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:28:22.275287  499596 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-407427/.minikube/bin
	I1202 20:28:22.275492  499596 out.go:368] Setting JSON to false
	I1202 20:28:22.275526  499596 mustload.go:66] Loading cluster: ha-751582
	I1202 20:28:22.275610  499596 notify.go:221] Checking for updates...
	I1202 20:28:22.275965  499596 config.go:182] Loaded profile config "ha-751582": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 20:28:22.275984  499596 status.go:174] checking status of ha-751582 ...
	I1202 20:28:22.276520  499596 cli_runner.go:164] Run: docker container inspect ha-751582 --format={{.State.Status}}
	I1202 20:28:22.296059  499596 status.go:371] ha-751582 host status = "Running" (err=<nil>)
	I1202 20:28:22.296127  499596 host.go:66] Checking if "ha-751582" exists ...
	I1202 20:28:22.296413  499596 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-751582
	I1202 20:28:22.315341  499596 host.go:66] Checking if "ha-751582" exists ...
	I1202 20:28:22.315665  499596 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 20:28:22.315722  499596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-751582
	I1202 20:28:22.336120  499596 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/ha-751582/id_rsa Username:docker}
	I1202 20:28:22.435655  499596 ssh_runner.go:195] Run: systemctl --version
	I1202 20:28:22.442846  499596 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 20:28:22.458095  499596 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 20:28:22.521461  499596 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-02 20:28:22.510448483 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 20:28:22.521998  499596 kubeconfig.go:125] found "ha-751582" server: "https://192.168.49.254:8443"
	I1202 20:28:22.522029  499596 api_server.go:166] Checking apiserver status ...
	I1202 20:28:22.522063  499596 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:28:22.534378  499596 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1268/cgroup
	W1202 20:28:22.543430  499596 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1268/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1202 20:28:22.543491  499596 ssh_runner.go:195] Run: ls
	I1202 20:28:22.547446  499596 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1202 20:28:22.552730  499596 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1202 20:28:22.552756  499596 status.go:463] ha-751582 apiserver status = Running (err=<nil>)
	I1202 20:28:22.552766  499596 status.go:176] ha-751582 status: &{Name:ha-751582 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1202 20:28:22.552782  499596 status.go:174] checking status of ha-751582-m02 ...
	I1202 20:28:22.553044  499596 cli_runner.go:164] Run: docker container inspect ha-751582-m02 --format={{.State.Status}}
	I1202 20:28:22.571536  499596 status.go:371] ha-751582-m02 host status = "Stopped" (err=<nil>)
	I1202 20:28:22.571560  499596 status.go:384] host is not running, skipping remaining checks
	I1202 20:28:22.571569  499596 status.go:176] ha-751582-m02 status: &{Name:ha-751582-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1202 20:28:22.571598  499596 status.go:174] checking status of ha-751582-m03 ...
	I1202 20:28:22.571886  499596 cli_runner.go:164] Run: docker container inspect ha-751582-m03 --format={{.State.Status}}
	I1202 20:28:22.591115  499596 status.go:371] ha-751582-m03 host status = "Running" (err=<nil>)
	I1202 20:28:22.591145  499596 host.go:66] Checking if "ha-751582-m03" exists ...
	I1202 20:28:22.591413  499596 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-751582-m03
	I1202 20:28:22.611843  499596 host.go:66] Checking if "ha-751582-m03" exists ...
	I1202 20:28:22.612142  499596 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 20:28:22.612179  499596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-751582-m03
	I1202 20:28:22.631562  499596 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/ha-751582-m03/id_rsa Username:docker}
	I1202 20:28:22.731825  499596 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 20:28:22.745724  499596 kubeconfig.go:125] found "ha-751582" server: "https://192.168.49.254:8443"
	I1202 20:28:22.745751  499596 api_server.go:166] Checking apiserver status ...
	I1202 20:28:22.745780  499596 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:28:22.757470  499596 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1186/cgroup
	W1202 20:28:22.767345  499596 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1186/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1202 20:28:22.767402  499596 ssh_runner.go:195] Run: ls
	I1202 20:28:22.771686  499596 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1202 20:28:22.777635  499596 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1202 20:28:22.777674  499596 status.go:463] ha-751582-m03 apiserver status = Running (err=<nil>)
	I1202 20:28:22.777683  499596 status.go:176] ha-751582-m03 status: &{Name:ha-751582-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1202 20:28:22.777705  499596 status.go:174] checking status of ha-751582-m04 ...
	I1202 20:28:22.778040  499596 cli_runner.go:164] Run: docker container inspect ha-751582-m04 --format={{.State.Status}}
	I1202 20:28:22.797721  499596 status.go:371] ha-751582-m04 host status = "Running" (err=<nil>)
	I1202 20:28:22.797752  499596 host.go:66] Checking if "ha-751582-m04" exists ...
	I1202 20:28:22.798119  499596 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-751582-m04
	I1202 20:28:22.817331  499596 host.go:66] Checking if "ha-751582-m04" exists ...
	I1202 20:28:22.817663  499596 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 20:28:22.817701  499596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-751582-m04
	I1202 20:28:22.837303  499596 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/ha-751582-m04/id_rsa Username:docker}
	I1202 20:28:22.936434  499596 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 20:28:22.948993  499596 status.go:176] ha-751582-m04 status: &{Name:ha-751582-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (14.43s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.76s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.76s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (8.87s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-751582 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-751582 node start m02 --alsologtostderr -v 5: (7.8799086s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-751582 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (8.87s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.95s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.95s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (119.82s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-751582 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-751582 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-751582 stop --alsologtostderr -v 5: (50.198600591s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-751582 start --wait true --alsologtostderr -v 5
E1202 20:30:02.841053  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/functional-136749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:30:02.847604  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/functional-136749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:30:02.859124  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/functional-136749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:30:02.881043  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/functional-136749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:30:02.923088  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/functional-136749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:30:03.004405  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/functional-136749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:30:03.166519  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/functional-136749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:30:03.488768  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/functional-136749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:30:04.130350  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/functional-136749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:30:05.412299  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/functional-136749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:30:07.973744  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/functional-136749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:30:13.096029  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/functional-136749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:30:23.338284  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/functional-136749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-751582 start --wait true --alsologtostderr -v 5: (1m9.479042676s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-751582 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (119.82s)
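For reference, the check above boils down to stopping and restarting the whole HA profile and confirming the node list survives the round trip. A minimal manual sketch using the same commands as the log (verbosity flags omitted; the profile name is the one from this run):

    out/minikube-linux-amd64 -p ha-751582 node list
    out/minikube-linux-amd64 -p ha-751582 stop
    out/minikube-linux-amd64 -p ha-751582 start --wait true
    out/minikube-linux-amd64 -p ha-751582 node list   # should report the same control-plane and worker nodes as before the stop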

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (10.83s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-751582 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-751582 node delete m03 --alsologtostderr -v 5: (9.904363485s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-751582 status --alsologtostderr -v 5
E1202 20:30:43.820446  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/functional-136749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.83s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.76s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.76s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (48.81s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-751582 stop --alsologtostderr -v 5
E1202 20:31:24.782625  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/functional-136749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-751582 stop --alsologtostderr -v 5: (48.680161019s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-751582 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-751582 status --alsologtostderr -v 5: exit status 7 (129.280829ms)

                                                
                                                
-- stdout --
	ha-751582
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-751582-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-751582-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 20:31:33.675611  513941 out.go:360] Setting OutFile to fd 1 ...
	I1202 20:31:33.675881  513941 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:31:33.675892  513941 out.go:374] Setting ErrFile to fd 2...
	I1202 20:31:33.675896  513941 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:31:33.676112  513941 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-407427/.minikube/bin
	I1202 20:31:33.676306  513941 out.go:368] Setting JSON to false
	I1202 20:31:33.676338  513941 mustload.go:66] Loading cluster: ha-751582
	I1202 20:31:33.676494  513941 notify.go:221] Checking for updates...
	I1202 20:31:33.676855  513941 config.go:182] Loaded profile config "ha-751582": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 20:31:33.676878  513941 status.go:174] checking status of ha-751582 ...
	I1202 20:31:33.677487  513941 cli_runner.go:164] Run: docker container inspect ha-751582 --format={{.State.Status}}
	I1202 20:31:33.697576  513941 status.go:371] ha-751582 host status = "Stopped" (err=<nil>)
	I1202 20:31:33.697603  513941 status.go:384] host is not running, skipping remaining checks
	I1202 20:31:33.697610  513941 status.go:176] ha-751582 status: &{Name:ha-751582 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1202 20:31:33.697668  513941 status.go:174] checking status of ha-751582-m02 ...
	I1202 20:31:33.698029  513941 cli_runner.go:164] Run: docker container inspect ha-751582-m02 --format={{.State.Status}}
	I1202 20:31:33.718153  513941 status.go:371] ha-751582-m02 host status = "Stopped" (err=<nil>)
	I1202 20:31:33.718177  513941 status.go:384] host is not running, skipping remaining checks
	I1202 20:31:33.718184  513941 status.go:176] ha-751582-m02 status: &{Name:ha-751582-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1202 20:31:33.718212  513941 status.go:174] checking status of ha-751582-m04 ...
	I1202 20:31:33.718484  513941 cli_runner.go:164] Run: docker container inspect ha-751582-m04 --format={{.State.Status}}
	I1202 20:31:33.737301  513941 status.go:371] ha-751582-m04 host status = "Stopped" (err=<nil>)
	I1202 20:31:33.737332  513941 status.go:384] host is not running, skipping remaining checks
	I1202 20:31:33.737340  513941 status.go:176] ha-751582-m04 status: &{Name:ha-751582-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (48.81s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (56.44s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-751582 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1202 20:31:56.097292  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/addons-893295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-751582 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (55.576060445s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-751582 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (56.44s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.75s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.75s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (44.83s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-751582 node add --control-plane --alsologtostderr -v 5
E1202 20:32:46.705019  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/functional-136749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:32:47.562754  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/functional-536475/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-751582 node add --control-plane --alsologtostderr -v 5: (43.882265406s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-751582 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (44.83s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.95s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.95s)

                                                
                                    
x
+
TestJSONOutput/start/Command (37.31s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-194418 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-194418 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (37.30734569s)
--- PASS: TestJSONOutput/start/Command (37.31s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (8.07s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-194418 --output=json --user=testUser
E1202 20:34:10.630761  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/functional-536475/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-194418 --output=json --user=testUser: (8.074655067s)
--- PASS: TestJSONOutput/stop/Command (8.07s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.26s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-131737 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-131737 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (88.766039ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"fcd3df36-2d23-4933-8ad9-61a3fded0b42","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-131737] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"758a412f-b3f8-44c5-a11b-329d9e2da3cb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21997"}}
	{"specversion":"1.0","id":"f869148f-d4eb-41ab-875c-83a752f80f37","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"8aba4709-68e3-4e39-9f30-adbcb047512b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21997-407427/kubeconfig"}}
	{"specversion":"1.0","id":"5d31e83d-1485-4d9c-99b6-781ba1078cf5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-407427/.minikube"}}
	{"specversion":"1.0","id":"47b7be53-aa86-405c-9a2c-76d8fb189135","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"83e45d24-45c9-4ea4-a5d3-c941185bc51f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"d92bfcfe-9a5e-49db-bbd9-7ea56e97989f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-131737" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-131737
--- PASS: TestErrorJSONOutput (0.26s)

                                                
                                    
x
+
TestKicCustomNetwork/create_custom_network (31.29s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-746508 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-746508 --network=: (29.05665857s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-746508" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-746508
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-746508: (2.21500786s)
--- PASS: TestKicCustomNetwork/create_custom_network (31.29s)

                                                
                                    
x
+
TestKicCustomNetwork/use_default_bridge_network (21.51s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-806572 --network=bridge
E1202 20:34:59.165280  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/addons-893295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:35:02.842029  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/functional-136749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-806572 --network=bridge: (19.442058424s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-806572" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-806572
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-806572: (2.046929427s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (21.51s)

                                                
                                    
x
+
TestKicExistingNetwork (24.5s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I1202 20:35:12.693771  411032 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1202 20:35:12.711610  411032 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1202 20:35:12.711683  411032 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1202 20:35:12.711705  411032 cli_runner.go:164] Run: docker network inspect existing-network
W1202 20:35:12.729439  411032 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1202 20:35:12.729478  411032 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I1202 20:35:12.729498  411032 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I1202 20:35:12.729657  411032 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1202 20:35:12.748061  411032 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-acf081edf266 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:42:04:c0:60:47:62} reservation:<nil>}
I1202 20:35:12.748475  411032 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001b49610}
I1202 20:35:12.748496  411032 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1202 20:35:12.748540  411032 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1202 20:35:12.797915  411032 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-943146 --network=existing-network
E1202 20:35:30.549286  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/functional-136749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-943146 --network=existing-network: (22.305383767s)
helpers_test.go:175: Cleaning up "existing-network-943146" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-943146
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-943146: (2.055608826s)
I1202 20:35:37.177560  411032 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (24.50s)
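For reference, the flow this test exercises can be reproduced by hand with the same commands recorded above: pre-create a labelled bridge network on a free subnet, then point minikube at it with --network. A sketch based on this run's log (the subnet and profile name are simply what this run picked; the final docker network rm is an assumed cleanup step, not shown in the log):

    docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 \
      -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
      --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
    out/minikube-linux-amd64 start -p existing-network-943146 --network=existing-network
    out/minikube-linux-amd64 delete -p existing-network-943146
    docker network rm existing-network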

                                                
                                    
x
+
TestKicCustomSubnet (29.18s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-022987 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-022987 --subnet=192.168.60.0/24: (26.937621271s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-022987 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-022987" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-022987
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-022987: (2.21752839s)
--- PASS: TestKicCustomSubnet (29.18s)

                                                
                                    
x
+
TestKicStaticIP (23.91s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-868959 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-868959 --static-ip=192.168.200.200: (21.532761324s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-868959 ip
helpers_test.go:175: Cleaning up "static-ip-868959" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-868959
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-868959: (2.204044495s)
--- PASS: TestKicStaticIP (23.91s)

                                                
                                    
x
+
TestMainNoArgs (0.07s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.07s)

                                                
                                    
x
+
TestMinikubeProfile (50.49s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-906383 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-906383 --driver=docker  --container-runtime=crio: (20.14621471s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-909491 --driver=docker  --container-runtime=crio
E1202 20:36:56.098216  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/addons-893295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-909491 --driver=docker  --container-runtime=crio: (24.203216985s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-906383
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-909491
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-909491" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-909491
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-909491: (2.41688329s)
helpers_test.go:175: Cleaning up "first-906383" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-906383
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-906383: (2.397907312s)
--- PASS: TestMinikubeProfile (50.49s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (5.15s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-162617 --memory=3072 --mount-string /tmp/TestMountStartserial1286580540/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-162617 --memory=3072 --mount-string /tmp/TestMountStartserial1286580540/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.154082558s)
--- PASS: TestMountStart/serial/StartWithMountFirst (5.15s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.29s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-162617 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.29s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (5.05s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-182058 --memory=3072 --mount-string /tmp/TestMountStartserial1286580540/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-182058 --memory=3072 --mount-string /tmp/TestMountStartserial1286580540/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.045908444s)
--- PASS: TestMountStart/serial/StartWithMountSecond (5.05s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.29s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-182058 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.29s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (1.71s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-162617 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-162617 --alsologtostderr -v=5: (1.711186374s)
--- PASS: TestMountStart/serial/DeleteFirst (1.71s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.29s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-182058 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.29s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.27s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-182058
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-182058: (1.270044809s)
--- PASS: TestMountStart/serial/Stop (1.27s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (7.81s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-182058
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-182058: (6.811088941s)
--- PASS: TestMountStart/serial/RestartStopped (7.81s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.29s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-182058 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.29s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (63.83s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-605284 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1202 20:37:47.562371  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/functional-536475/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-605284 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (1m3.322230555s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-605284 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (63.83s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (4.21s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-605284 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-605284 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-605284 -- rollout status deployment/busybox: (2.726445103s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-605284 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-605284 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-605284 -- exec busybox-7b57f96db7-2mmh6 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-605284 -- exec busybox-7b57f96db7-pjmh8 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-605284 -- exec busybox-7b57f96db7-2mmh6 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-605284 -- exec busybox-7b57f96db7-pjmh8 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-605284 -- exec busybox-7b57f96db7-2mmh6 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-605284 -- exec busybox-7b57f96db7-pjmh8 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.21s)

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (0.79s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-605284 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-605284 -- exec busybox-7b57f96db7-2mmh6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-605284 -- exec busybox-7b57f96db7-2mmh6 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-605284 -- exec busybox-7b57f96db7-pjmh8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-605284 -- exec busybox-7b57f96db7-pjmh8 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.79s)

                                                
                                    
x
+
TestMultiNode/serial/AddNode (24.04s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-605284 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-605284 -v=5 --alsologtostderr: (23.363986729s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-605284 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (24.04s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-605284 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.69s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.69s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (10.42s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-605284 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-605284 cp testdata/cp-test.txt multinode-605284:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-605284 ssh -n multinode-605284 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-605284 cp multinode-605284:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4049158651/001/cp-test_multinode-605284.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-605284 ssh -n multinode-605284 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-605284 cp multinode-605284:/home/docker/cp-test.txt multinode-605284-m02:/home/docker/cp-test_multinode-605284_multinode-605284-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-605284 ssh -n multinode-605284 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-605284 ssh -n multinode-605284-m02 "sudo cat /home/docker/cp-test_multinode-605284_multinode-605284-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-605284 cp multinode-605284:/home/docker/cp-test.txt multinode-605284-m03:/home/docker/cp-test_multinode-605284_multinode-605284-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-605284 ssh -n multinode-605284 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-605284 ssh -n multinode-605284-m03 "sudo cat /home/docker/cp-test_multinode-605284_multinode-605284-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-605284 cp testdata/cp-test.txt multinode-605284-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-605284 ssh -n multinode-605284-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-605284 cp multinode-605284-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4049158651/001/cp-test_multinode-605284-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-605284 ssh -n multinode-605284-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-605284 cp multinode-605284-m02:/home/docker/cp-test.txt multinode-605284:/home/docker/cp-test_multinode-605284-m02_multinode-605284.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-605284 ssh -n multinode-605284-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-605284 ssh -n multinode-605284 "sudo cat /home/docker/cp-test_multinode-605284-m02_multinode-605284.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-605284 cp multinode-605284-m02:/home/docker/cp-test.txt multinode-605284-m03:/home/docker/cp-test_multinode-605284-m02_multinode-605284-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-605284 ssh -n multinode-605284-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-605284 ssh -n multinode-605284-m03 "sudo cat /home/docker/cp-test_multinode-605284-m02_multinode-605284-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-605284 cp testdata/cp-test.txt multinode-605284-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-605284 ssh -n multinode-605284-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-605284 cp multinode-605284-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4049158651/001/cp-test_multinode-605284-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-605284 ssh -n multinode-605284-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-605284 cp multinode-605284-m03:/home/docker/cp-test.txt multinode-605284:/home/docker/cp-test_multinode-605284-m03_multinode-605284.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-605284 ssh -n multinode-605284-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-605284 ssh -n multinode-605284 "sudo cat /home/docker/cp-test_multinode-605284-m03_multinode-605284.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-605284 cp multinode-605284-m03:/home/docker/cp-test.txt multinode-605284-m02:/home/docker/cp-test_multinode-605284-m03_multinode-605284-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-605284 ssh -n multinode-605284-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-605284 ssh -n multinode-605284-m02 "sudo cat /home/docker/cp-test_multinode-605284-m03_multinode-605284-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.42s)
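For reference, the copy round-trip this test exercises can be reproduced by hand; a minimal shell sketch, assuming the multinode-605284 profile above is running (the node-to-node destination filename is shortened here):

  # copy a local file into the control-plane node and read it back over ssh
  minikube -p multinode-605284 cp testdata/cp-test.txt multinode-605284:/home/docker/cp-test.txt
  minikube -p multinode-605284 ssh -n multinode-605284 "sudo cat /home/docker/cp-test.txt"
  # copy the same file node-to-node and verify it arrived on the target node
  minikube -p multinode-605284 cp multinode-605284:/home/docker/cp-test.txt multinode-605284-m02:/home/docker/cp-test_copy.txt
  minikube -p multinode-605284 ssh -n multinode-605284-m02 "sudo cat /home/docker/cp-test_copy.txt"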

                                                
                                    
TestMultiNode/serial/StopNode (2.33s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-605284 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-605284 node stop m03: (1.284570588s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-605284 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-605284 status: exit status 7 (524.272989ms)

                                                
                                                
-- stdout --
	multinode-605284
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-605284-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-605284-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-605284 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-605284 status --alsologtostderr: exit status 7 (519.053715ms)

                                                
                                                
-- stdout --
	multinode-605284
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-605284-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-605284-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 20:39:30.844916  573154 out.go:360] Setting OutFile to fd 1 ...
	I1202 20:39:30.845187  573154 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:39:30.845197  573154 out.go:374] Setting ErrFile to fd 2...
	I1202 20:39:30.845201  573154 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:39:30.845410  573154 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-407427/.minikube/bin
	I1202 20:39:30.845589  573154 out.go:368] Setting JSON to false
	I1202 20:39:30.845616  573154 mustload.go:66] Loading cluster: multinode-605284
	I1202 20:39:30.845773  573154 notify.go:221] Checking for updates...
	I1202 20:39:30.845969  573154 config.go:182] Loaded profile config "multinode-605284": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 20:39:30.845982  573154 status.go:174] checking status of multinode-605284 ...
	I1202 20:39:30.846460  573154 cli_runner.go:164] Run: docker container inspect multinode-605284 --format={{.State.Status}}
	I1202 20:39:30.867178  573154 status.go:371] multinode-605284 host status = "Running" (err=<nil>)
	I1202 20:39:30.867228  573154 host.go:66] Checking if "multinode-605284" exists ...
	I1202 20:39:30.867497  573154 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-605284
	I1202 20:39:30.887281  573154 host.go:66] Checking if "multinode-605284" exists ...
	I1202 20:39:30.887600  573154 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 20:39:30.887661  573154 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-605284
	I1202 20:39:30.906126  573154 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33288 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/multinode-605284/id_rsa Username:docker}
	I1202 20:39:31.004754  573154 ssh_runner.go:195] Run: systemctl --version
	I1202 20:39:31.011184  573154 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 20:39:31.024013  573154 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 20:39:31.078842  573154 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:64 SystemTime:2025-12-02 20:39:31.068686644 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 20:39:31.079630  573154 kubeconfig.go:125] found "multinode-605284" server: "https://192.168.67.2:8443"
	I1202 20:39:31.079668  573154 api_server.go:166] Checking apiserver status ...
	I1202 20:39:31.079714  573154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:39:31.091726  573154 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1248/cgroup
	W1202 20:39:31.101341  573154 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1248/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1202 20:39:31.101400  573154 ssh_runner.go:195] Run: ls
	I1202 20:39:31.105795  573154 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1202 20:39:31.111301  573154 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1202 20:39:31.111330  573154 status.go:463] multinode-605284 apiserver status = Running (err=<nil>)
	I1202 20:39:31.111340  573154 status.go:176] multinode-605284 status: &{Name:multinode-605284 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1202 20:39:31.111357  573154 status.go:174] checking status of multinode-605284-m02 ...
	I1202 20:39:31.111615  573154 cli_runner.go:164] Run: docker container inspect multinode-605284-m02 --format={{.State.Status}}
	I1202 20:39:31.129818  573154 status.go:371] multinode-605284-m02 host status = "Running" (err=<nil>)
	I1202 20:39:31.129844  573154 host.go:66] Checking if "multinode-605284-m02" exists ...
	I1202 20:39:31.130184  573154 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-605284-m02
	I1202 20:39:31.149294  573154 host.go:66] Checking if "multinode-605284-m02" exists ...
	I1202 20:39:31.149597  573154 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 20:39:31.149639  573154 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-605284-m02
	I1202 20:39:31.169673  573154 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33293 SSHKeyPath:/home/jenkins/minikube-integration/21997-407427/.minikube/machines/multinode-605284-m02/id_rsa Username:docker}
	I1202 20:39:31.267432  573154 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 20:39:31.280738  573154 status.go:176] multinode-605284-m02 status: &{Name:multinode-605284-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1202 20:39:31.280776  573154 status.go:174] checking status of multinode-605284-m03 ...
	I1202 20:39:31.281042  573154 cli_runner.go:164] Run: docker container inspect multinode-605284-m03 --format={{.State.Status}}
	I1202 20:39:31.299450  573154 status.go:371] multinode-605284-m03 host status = "Stopped" (err=<nil>)
	I1202 20:39:31.299479  573154 status.go:384] host is not running, skipping remaining checks
	I1202 20:39:31.299488  573154 status.go:176] multinode-605284-m03 status: &{Name:multinode-605284-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.33s)
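A minimal sketch of the stop-and-check flow above, assuming the same profile. Note that minikube status intentionally returns exit code 7 when a node's host is stopped (as in this run), so scripted callers have to treat that exit code as informational rather than as a failure:

  minikube -p multinode-605284 node stop m03
  minikube -p multinode-605284 status || echo "status exited with $? (7 here: one node is stopped)"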

                                                
                                    
TestMultiNode/serial/StartAfterStop (7.52s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-605284 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-605284 node start m03 -v=5 --alsologtostderr: (6.78365154s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-605284 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.52s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (73.6s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-605284
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-605284
E1202 20:40:02.844350  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/functional-136749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-605284: (29.664693234s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-605284 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-605284 --wait=true -v=5 --alsologtostderr: (43.803980906s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-605284
--- PASS: TestMultiNode/serial/RestartKeepsNodes (73.60s)
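The restart check above amounts to: record the node list, stop the whole profile, start it again with --wait, and confirm the same nodes come back. A sketch, assuming the same profile:

  minikube node list -p multinode-605284          # record the current node set
  minikube stop -p multinode-605284               # stops every node in the profile
  minikube start -p multinode-605284 --wait=true  # restart and wait for components to be ready
  minikube node list -p multinode-605284          # should list the same nodes as before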

                                                
                                    
TestMultiNode/serial/DeleteNode (5.35s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-605284 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-605284 node delete m03: (4.716000436s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-605284 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.35s)
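A sketch of the delete-and-verify step, assuming kubectl is pointed at the multinode-605284 context; the go-template is the one the test itself uses to print each node's Ready condition:

  minikube -p multinode-605284 node delete m03
  kubectl get nodes
  kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'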

                                                
                                    
TestMultiNode/serial/StopMultiNode (30.52s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-605284 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-605284 stop: (30.311928931s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-605284 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-605284 status: exit status 7 (105.378931ms)

                                                
                                                
-- stdout --
	multinode-605284
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-605284-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-605284 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-605284 status --alsologtostderr: exit status 7 (105.710189ms)

                                                
                                                
-- stdout --
	multinode-605284
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-605284-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 20:41:28.250196  582913 out.go:360] Setting OutFile to fd 1 ...
	I1202 20:41:28.250334  582913 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:41:28.250343  582913 out.go:374] Setting ErrFile to fd 2...
	I1202 20:41:28.250347  582913 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:41:28.250562  582913 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-407427/.minikube/bin
	I1202 20:41:28.250739  582913 out.go:368] Setting JSON to false
	I1202 20:41:28.250767  582913 mustload.go:66] Loading cluster: multinode-605284
	I1202 20:41:28.250957  582913 notify.go:221] Checking for updates...
	I1202 20:41:28.251158  582913 config.go:182] Loaded profile config "multinode-605284": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 20:41:28.251175  582913 status.go:174] checking status of multinode-605284 ...
	I1202 20:41:28.251648  582913 cli_runner.go:164] Run: docker container inspect multinode-605284 --format={{.State.Status}}
	I1202 20:41:28.272109  582913 status.go:371] multinode-605284 host status = "Stopped" (err=<nil>)
	I1202 20:41:28.272138  582913 status.go:384] host is not running, skipping remaining checks
	I1202 20:41:28.272149  582913 status.go:176] multinode-605284 status: &{Name:multinode-605284 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1202 20:41:28.272203  582913 status.go:174] checking status of multinode-605284-m02 ...
	I1202 20:41:28.272557  582913 cli_runner.go:164] Run: docker container inspect multinode-605284-m02 --format={{.State.Status}}
	I1202 20:41:28.291533  582913 status.go:371] multinode-605284-m02 host status = "Stopped" (err=<nil>)
	I1202 20:41:28.291560  582913 status.go:384] host is not running, skipping remaining checks
	I1202 20:41:28.291568  582913 status.go:176] multinode-605284-m02 status: &{Name:multinode-605284-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (30.52s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (50.52s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-605284 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1202 20:41:56.096401  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/addons-893295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-605284 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (49.888632511s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-605284 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (50.52s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (26.88s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-605284
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-605284-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-605284-m02 --driver=docker  --container-runtime=crio: exit status 14 (84.920484ms)

                                                
                                                
-- stdout --
	* [multinode-605284-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21997-407427/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-407427/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-605284-m02' is duplicated with machine name 'multinode-605284-m02' in profile 'multinode-605284'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-605284-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-605284-m03 --driver=docker  --container-runtime=crio: (23.972581775s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-605284
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-605284: exit status 80 (319.574489ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-605284 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-605284-m03 already exists in multinode-605284-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-605284-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-605284-m03: (2.440190819s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (26.88s)
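A sketch of the naming rules this test validates: a new profile may not reuse a machine name that already belongs to another profile (exit 14, MK_USAGE), and node add refuses to create a node whose generated name is taken by a standalone profile (exit 80):

  # fails: multinode-605284-m02 is already a machine inside profile multinode-605284
  minikube start -p multinode-605284-m02 --driver=docker --container-runtime=crio
  # a non-conflicting profile name is accepted
  minikube start -p multinode-605284-m03 --driver=docker --container-runtime=crio
  # fails: the node it would create (multinode-605284-m03) now collides with that standalone profile
  minikube node add -p multinode-605284
  minikube delete -p multinode-605284-m03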

                                                
                                    
TestPreload (107.92s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:41: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-947006 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio
preload_test.go:41: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-947006 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio: (49.052411704s)
preload_test.go:49: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-947006 image pull gcr.io/k8s-minikube/busybox
preload_test.go:49: (dbg) Done: out/minikube-linux-amd64 -p test-preload-947006 image pull gcr.io/k8s-minikube/busybox: (2.29075582s)
preload_test.go:55: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-947006
preload_test.go:55: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-947006: (8.106781669s)
preload_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-947006 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-947006 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (45.789318077s)
preload_test.go:68: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-947006 image list
helpers_test.go:175: Cleaning up "test-preload-947006" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-947006
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-947006: (2.43083841s)
--- PASS: TestPreload (107.92s)
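The preload check reduces to: pull an image into a cluster created without the preloaded tarball, restart with preload enabled, and confirm the image is still present. A sketch using a hypothetical profile name:

  minikube start -p preload-check --memory=3072 --preload=false --driver=docker --container-runtime=crio
  minikube -p preload-check image pull gcr.io/k8s-minikube/busybox
  minikube stop -p preload-check
  minikube start -p preload-check --preload=true --wait=true --driver=docker --container-runtime=crio
  minikube -p preload-check image list   # busybox should still be listed
  minikube delete -p preload-check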

                                                
                                    
TestScheduledStopUnix (97.93s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-874349 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-874349 --memory=3072 --driver=docker  --container-runtime=crio: (22.357446725s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-874349 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1202 20:45:00.359581  599918 out.go:360] Setting OutFile to fd 1 ...
	I1202 20:45:00.359844  599918 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:45:00.359853  599918 out.go:374] Setting ErrFile to fd 2...
	I1202 20:45:00.359857  599918 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:45:00.360112  599918 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-407427/.minikube/bin
	I1202 20:45:00.360383  599918 out.go:368] Setting JSON to false
	I1202 20:45:00.360480  599918 mustload.go:66] Loading cluster: scheduled-stop-874349
	I1202 20:45:00.360823  599918 config.go:182] Loaded profile config "scheduled-stop-874349": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 20:45:00.360893  599918 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/scheduled-stop-874349/config.json ...
	I1202 20:45:00.361089  599918 mustload.go:66] Loading cluster: scheduled-stop-874349
	I1202 20:45:00.361193  599918 config.go:182] Loaded profile config "scheduled-stop-874349": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-874349 -n scheduled-stop-874349
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-874349 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1202 20:45:00.785253  600065 out.go:360] Setting OutFile to fd 1 ...
	I1202 20:45:00.785429  600065 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:45:00.785435  600065 out.go:374] Setting ErrFile to fd 2...
	I1202 20:45:00.785439  600065 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:45:00.785754  600065 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-407427/.minikube/bin
	I1202 20:45:00.786191  600065 out.go:368] Setting JSON to false
	I1202 20:45:00.786408  600065 daemonize_unix.go:73] killing process 599953 as it is an old scheduled stop
	I1202 20:45:00.786523  600065 mustload.go:66] Loading cluster: scheduled-stop-874349
	I1202 20:45:00.786894  600065 config.go:182] Loaded profile config "scheduled-stop-874349": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 20:45:00.786981  600065 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/scheduled-stop-874349/config.json ...
	I1202 20:45:00.787200  600065 mustload.go:66] Loading cluster: scheduled-stop-874349
	I1202 20:45:00.787333  600065 config.go:182] Loaded profile config "scheduled-stop-874349": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1202 20:45:00.794185  411032 retry.go:31] will retry after 129.104µs: open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/scheduled-stop-874349/pid: no such file or directory
I1202 20:45:00.795369  411032 retry.go:31] will retry after 186.712µs: open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/scheduled-stop-874349/pid: no such file or directory
I1202 20:45:00.796517  411032 retry.go:31] will retry after 302.603µs: open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/scheduled-stop-874349/pid: no such file or directory
I1202 20:45:00.797674  411032 retry.go:31] will retry after 479.647µs: open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/scheduled-stop-874349/pid: no such file or directory
I1202 20:45:00.798849  411032 retry.go:31] will retry after 713.815µs: open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/scheduled-stop-874349/pid: no such file or directory
I1202 20:45:00.800057  411032 retry.go:31] will retry after 862.847µs: open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/scheduled-stop-874349/pid: no such file or directory
I1202 20:45:00.801248  411032 retry.go:31] will retry after 1.647338ms: open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/scheduled-stop-874349/pid: no such file or directory
I1202 20:45:00.803467  411032 retry.go:31] will retry after 1.111689ms: open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/scheduled-stop-874349/pid: no such file or directory
I1202 20:45:00.805713  411032 retry.go:31] will retry after 3.806409ms: open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/scheduled-stop-874349/pid: no such file or directory
I1202 20:45:00.809994  411032 retry.go:31] will retry after 5.026597ms: open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/scheduled-stop-874349/pid: no such file or directory
I1202 20:45:00.815249  411032 retry.go:31] will retry after 6.997029ms: open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/scheduled-stop-874349/pid: no such file or directory
I1202 20:45:00.822508  411032 retry.go:31] will retry after 9.45695ms: open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/scheduled-stop-874349/pid: no such file or directory
I1202 20:45:00.832819  411032 retry.go:31] will retry after 10.279283ms: open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/scheduled-stop-874349/pid: no such file or directory
I1202 20:45:00.844140  411032 retry.go:31] will retry after 14.548123ms: open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/scheduled-stop-874349/pid: no such file or directory
I1202 20:45:00.859418  411032 retry.go:31] will retry after 42.151168ms: open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/scheduled-stop-874349/pid: no such file or directory
I1202 20:45:00.902740  411032 retry.go:31] will retry after 45.532832ms: open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/scheduled-stop-874349/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-874349 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
E1202 20:45:02.840355  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/functional-136749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-874349 -n scheduled-stop-874349
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-874349
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-874349 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1202 20:45:26.761452  600703 out.go:360] Setting OutFile to fd 1 ...
	I1202 20:45:26.761553  600703 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:45:26.761557  600703 out.go:374] Setting ErrFile to fd 2...
	I1202 20:45:26.761561  600703 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:45:26.761795  600703 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-407427/.minikube/bin
	I1202 20:45:26.762052  600703 out.go:368] Setting JSON to false
	I1202 20:45:26.762144  600703 mustload.go:66] Loading cluster: scheduled-stop-874349
	I1202 20:45:26.762496  600703 config.go:182] Loaded profile config "scheduled-stop-874349": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 20:45:26.762570  600703 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/scheduled-stop-874349/config.json ...
	I1202 20:45:26.762768  600703 mustload.go:66] Loading cluster: scheduled-stop-874349
	I1202 20:45:26.762865  600703 config.go:182] Loaded profile config "scheduled-stop-874349": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-874349
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-874349: exit status 7 (87.751057ms)

                                                
                                                
-- stdout --
	scheduled-stop-874349
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-874349 -n scheduled-stop-874349
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-874349 -n scheduled-stop-874349: exit status 7 (85.525191ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-874349" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-874349
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-874349: (3.919666855s)
--- PASS: TestScheduledStopUnix (97.93s)
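A sketch of the scheduled-stop workflow exercised above, assuming a running profile like the one in the log:

  minikube stop -p scheduled-stop-874349 --schedule 5m        # arm a stop five minutes out
  minikube status -p scheduled-stop-874349 --format='{{.TimeToStop}}'
  minikube stop -p scheduled-stop-874349 --schedule 15s       # re-scheduling kills the previous timer process
  minikube stop -p scheduled-stop-874349 --cancel-scheduled   # cancel all pending scheduled stops
  # after a scheduled stop fires, status reports Stopped and exits with code 7
  minikube status -p scheduled-stop-874349 --format='{{.Host}}'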

                                                
                                    
TestInsufficientStorage (12.18s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-284261 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-284261 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (9.602853844s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"c8f3ec57-4302-403e-b1ff-805fa6a1cc0a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-284261] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"0ed4e509-6c79-4823-b4bb-fd4c9d257f05","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21997"}}
	{"specversion":"1.0","id":"002bd7d2-4d09-46d7-bddf-b43ef365dbc8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"8a268ad7-4e4b-45d5-9ed9-1eb7193df273","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21997-407427/kubeconfig"}}
	{"specversion":"1.0","id":"9fef0c3d-cc1a-42c9-9a1e-3d671141f6e8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-407427/.minikube"}}
	{"specversion":"1.0","id":"347c5ff3-65a7-49a0-a1a0-583626a6345e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"768ea3a2-804a-46d1-a741-9a049d4c2426","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"d62904b1-93dd-43b1-9ccd-eacbba6bb04a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"41a4b3f3-bbb4-47d4-89f2-239c0d007be4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"d75f261d-a556-4ad6-9876-7046b21c5ed8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"9ac96c29-cf4c-4980-b017-52659199b1ab","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"46fe4aaa-0785-4052-ad52-248ccae7bf4a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-284261\" primary control-plane node in \"insufficient-storage-284261\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"093da5c8-9ded-4792-a25c-19259b5b1796","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1764169655-21974 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"40055a49-a1f8-4d1b-8fe6-22e5f0ebe7df","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"46de0386-c60f-4171-8b81-8c96bfebe802","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-284261 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-284261 --output=json --layout=cluster: exit status 7 (313.553088ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-284261","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-284261","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1202 20:46:25.768604  603210 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-284261" does not appear in /home/jenkins/minikube-integration/21997-407427/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-284261 --output=json --layout=cluster
E1202 20:46:25.911026  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/functional-136749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-284261 --output=json --layout=cluster: exit status 7 (313.43962ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-284261","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-284261","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1202 20:46:26.082891  603320 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-284261" does not appear in /home/jenkins/minikube-integration/21997-407427/kubeconfig
	E1202 20:46:26.093732  603320 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/insufficient-storage-284261/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-284261" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-284261
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-284261: (1.946240242s)
--- PASS: TestInsufficientStorage (12.18s)
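The JSON output makes the out-of-space condition machine-readable; a sketch of checking for it (jq assumed to be available), matching the exit code 26 and StatusCode 507 seen above:

  minikube start -p insufficient-storage-284261 --memory=3072 --output=json --driver=docker --container-runtime=crio
  echo "start exit code: $?"   # 26 = RSRC_DOCKER_STORAGE in the run above
  minikube status -p insufficient-storage-284261 --output=json --layout=cluster | jq '.StatusCode, .StatusName'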

                                                
                                    
TestRunningBinaryUpgrade (296.91s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.2095692990 start -p running-upgrade-984874 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.2095692990 start -p running-upgrade-984874 --memory=3072 --vm-driver=docker  --container-runtime=crio: (21.128100941s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-984874 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-984874 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m29.478373534s)
helpers_test.go:175: Cleaning up "running-upgrade-984874" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-984874
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-984874: (3.035799033s)
--- PASS: TestRunningBinaryUpgrade (296.91s)
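The running-binary upgrade path is simply: create a cluster with an older release binary, then run the newer binary's start against the same, still-running profile. A sketch, with the old binary's path hypothetical:

  /tmp/minikube-v1.35.0 start -p running-upgrade --memory=3072 --vm-driver=docker --container-runtime=crio
  # upgrade in place: the newer binary adopts and restarts the existing profile
  out/minikube-linux-amd64 start -p running-upgrade --memory=3072 --driver=docker --container-runtime=crio
  out/minikube-linux-amd64 delete -p running-upgrade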

                                                
                                    
TestKubernetesUpgrade (84.1s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-020528 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-020528 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (27.502798876s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-020528
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-020528: (1.920808811s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-020528 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-020528 status --format={{.Host}}: exit status 7 (84.991241ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-020528 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-020528 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (46.874753532s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-020528 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-020528 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-020528 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (84.044457ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-020528] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21997-407427/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-407427/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.35.0-beta.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-020528
	    minikube start -p kubernetes-upgrade-020528 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-0205282 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.35.0-beta.0, by running:
	    
	    minikube start -p kubernetes-upgrade-020528 --kubernetes-version=v1.35.0-beta.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-020528 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-020528 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4.922481405s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-020528" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-020528
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-020528: (2.650429071s)
--- PASS: TestKubernetesUpgrade (84.10s)
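A sketch of the upgrade-then-attempted-downgrade sequence above (profile name shortened); the downgrade is refused with exit 106, and the recovery is the delete-and-recreate path minikube itself suggests:

  minikube start -p kubernetes-upgrade --memory=3072 --kubernetes-version=v1.28.0 --driver=docker --container-runtime=crio
  minikube stop -p kubernetes-upgrade
  minikube start -p kubernetes-upgrade --memory=3072 --kubernetes-version=v1.35.0-beta.0 --driver=docker --container-runtime=crio
  kubectl --context kubernetes-upgrade version --output=json
  # downgrading an existing cluster is not supported (exit 106); recreate instead
  minikube start -p kubernetes-upgrade --kubernetes-version=v1.28.0 --driver=docker --container-runtime=crio || {
    minikube delete -p kubernetes-upgrade
    minikube start -p kubernetes-upgrade --kubernetes-version=v1.28.0 --driver=docker --container-runtime=crio
  }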

                                                
                                    
TestMissingContainerUpgrade (60.19s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.35.0.325177893 start -p missing-upgrade-497818 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.35.0.325177893 start -p missing-upgrade-497818 --memory=3072 --driver=docker  --container-runtime=crio: (19.420033644s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-497818
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-497818: (1.80959921s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-497818
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-497818 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1202 20:50:02.840594  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/functional-136749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-497818 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (33.238694482s)
helpers_test.go:175: Cleaning up "missing-upgrade-497818" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-497818
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-497818: (2.439969284s)
--- PASS: TestMissingContainerUpgrade (60.19s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (3.22s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (3.22s)

                                                
                                    
TestPause/serial/Start (55.94s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-796891 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-796891 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (55.942406845s)
--- PASS: TestPause/serial/Start (55.94s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-811845 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-811845 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (96.699806ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-811845] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21997-407427/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-407427/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
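The usage failure above is the expected outcome: minikube refuses --kubernetes-version together with --no-kubernetes before doing any cluster work. A minimal reproduction sketch (hypothetical profile name; assumes minikube is on PATH and that the MK_USAGE error maps to exit code 14, as the log shows):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("minikube", "start", "-p", "nok8s-demo",
			"--no-kubernetes", "--kubernetes-version=v1.28.0",
			"--driver=docker", "--container-runtime=crio")
		out, err := cmd.CombinedOutput()

		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) && exitErr.ExitCode() == 14 {
			fmt.Printf("got the expected usage error:\n%s\n", out)
			return
		}
		fmt.Printf("expected exit code 14, got err=%v\n%s\n", err, out)
	}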

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (39.36s)
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-811845 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-811845 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (38.977693067s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-811845 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (39.36s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (310.99s)
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.1114311059 start -p stopped-upgrade-814137 --memory=3072 --vm-driver=docker  --container-runtime=crio
E1202 20:46:56.096393  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/addons-893295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.1114311059 start -p stopped-upgrade-814137 --memory=3072 --vm-driver=docker  --container-runtime=crio: (48.359152676s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.1114311059 -p stopped-upgrade-814137 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.1114311059 -p stopped-upgrade-814137 stop: (1.249338154s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-814137 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-814137 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m21.376125729s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (310.99s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (6.37s)
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-811845 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-811845 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (4.030968314s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-811845 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-811845 status -o json: exit status 2 (332.767077ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-811845","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-811845
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-811845: (2.007236505s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (6.37s)
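The status check above relies on the JSON emitted by `minikube status -o json`; exit status 2 is simply how the command signals that a component is stopped. A sketch that decodes the same fields (the struct only names the keys visible in the log; real output may carry more):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// status mirrors the fields shown in the log output above.
	type status struct {
		Name      string
		Host      string
		Kubelet   string
		APIServer string
	}

	func main() {
		// Ignore the non-zero exit (a stopped kubelet yields exit status 2)
		// and inspect the JSON payload instead.
		out, _ := exec.Command("minikube", "-p", "NoKubernetes-811845", "status", "-o", "json").Output()

		var s status
		if err := json.Unmarshal(out, &s); err != nil {
			fmt.Println("decode failed:", err)
			return
		}
		fmt.Printf("host=%s kubelet=%s apiserver=%s\n", s.Host, s.Kubelet, s.APIServer)
	}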

                                                
                                    
TestNoKubernetes/serial/Start (7.7s)
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-811845 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-811845 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (7.694914074s)
--- PASS: TestNoKubernetes/serial/Start (7.70s)

                                                
                                    
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/21997-407427/.minikube/cache/linux/amd64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)
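The check above simply inspects the binary cache under the minikube home directory to confirm that a --no-kubernetes start did not download kubelet/kubeadm/kubectl. A sketch of the same idea (the cache path is taken from the log line; the v0.0.0 directory matches the KubernetesVersion=v0.0.0 recorded for the profile, and the home-directory fallback is an assumption):

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	func main() {
		home := os.Getenv("MINIKUBE_HOME")
		if home == "" {
			home = filepath.Join(os.Getenv("HOME"), ".minikube")
		}
		dir := filepath.Join(home, "cache", "linux", "amd64", "v0.0.0")

		entries, err := os.ReadDir(dir)
		if os.IsNotExist(err) {
			fmt.Println("cache directory absent - nothing was downloaded")
			return
		}
		if err != nil {
			fmt.Println("read error:", err)
			return
		}
		if len(entries) == 0 {
			fmt.Println("cache directory empty - nothing was downloaded")
			return
		}
		for _, e := range entries {
			fmt.Println("unexpected cached file:", e.Name())
		}
	}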

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.35s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-811845 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-811845 "sudo systemctl is-active --quiet service kubelet": exit status 1 (345.536407ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.35s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.86s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.86s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.29s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-811845
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-811845: (1.291120919s)
--- PASS: TestNoKubernetes/serial/Stop (1.29s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (6.52s)
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-796891 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-796891 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (6.505841254s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (6.52s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (7.48s)
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-811845 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-811845 --driver=docker  --container-runtime=crio: (7.481254127s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.48s)

                                                
                                    
TestNetworkPlugins/group/false (4.25s)
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-775392 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-775392 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (168.728151ms)

                                                
                                                
-- stdout --
	* [false-775392] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21997-407427/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-407427/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 20:47:31.911678  623164 out.go:360] Setting OutFile to fd 1 ...
	I1202 20:47:31.911934  623164 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:47:31.911943  623164 out.go:374] Setting ErrFile to fd 2...
	I1202 20:47:31.911948  623164 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:47:31.912154  623164 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-407427/.minikube/bin
	I1202 20:47:31.912632  623164 out.go:368] Setting JSON to false
	I1202 20:47:31.913868  623164 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":8996,"bootTime":1764699456,"procs":280,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 20:47:31.913936  623164 start.go:143] virtualization: kvm guest
	I1202 20:47:31.915690  623164 out.go:179] * [false-775392] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1202 20:47:31.917091  623164 notify.go:221] Checking for updates...
	I1202 20:47:31.917098  623164 out.go:179]   - MINIKUBE_LOCATION=21997
	I1202 20:47:31.918439  623164 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 20:47:31.919811  623164 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-407427/kubeconfig
	I1202 20:47:31.921107  623164 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-407427/.minikube
	I1202 20:47:31.923096  623164 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1202 20:47:31.924420  623164 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 20:47:31.926232  623164 config.go:182] Loaded profile config "NoKubernetes-811845": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1202 20:47:31.926392  623164 config.go:182] Loaded profile config "pause-796891": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 20:47:31.926505  623164 config.go:182] Loaded profile config "stopped-upgrade-814137": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1202 20:47:31.926625  623164 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 20:47:31.951009  623164 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1202 20:47:31.951144  623164 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 20:47:32.011361  623164 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-02 20:47:32.00108819 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 20:47:32.011480  623164 docker.go:319] overlay module found
	I1202 20:47:32.013143  623164 out.go:179] * Using the docker driver based on user configuration
	I1202 20:47:32.014250  623164 start.go:309] selected driver: docker
	I1202 20:47:32.014270  623164 start.go:927] validating driver "docker" against <nil>
	I1202 20:47:32.014283  623164 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 20:47:32.015834  623164 out.go:203] 
	W1202 20:47:32.016885  623164 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1202 20:47:32.018010  623164 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-775392 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-775392

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-775392

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-775392

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-775392

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-775392

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-775392

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-775392

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-775392

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-775392

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-775392

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-775392" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-775392"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-775392" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-775392"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-775392" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-775392"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-775392

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-775392" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-775392"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-775392" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-775392"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-775392" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-775392" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-775392" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-775392" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-775392" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-775392" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-775392" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-775392" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-775392" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-775392"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-775392" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-775392"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-775392" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-775392"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-775392" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-775392"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-775392" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-775392"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-775392" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-775392" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-775392" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-775392" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-775392"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-775392" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-775392"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-775392" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-775392"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-775392" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-775392"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-775392" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-775392"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21997-407427/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 02 Dec 2025 20:47:28 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: pause-796891
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21997-407427/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 02 Dec 2025 20:47:28 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: stopped-upgrade-814137
contexts:
- context:
    cluster: pause-796891
    extensions:
    - extension:
        last-update: Tue, 02 Dec 2025 20:47:28 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-796891
  name: pause-796891
- context:
    cluster: stopped-upgrade-814137
    user: stopped-upgrade-814137
  name: stopped-upgrade-814137
current-context: pause-796891
kind: Config
users:
- name: pause-796891
  user:
    client-certificate: /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/pause-796891/client.crt
    client-key: /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/pause-796891/client.key
- name: stopped-upgrade-814137
  user:
    client-certificate: /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/stopped-upgrade-814137/client.crt
    client-key: /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/stopped-upgrade-814137/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-775392

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-775392" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-775392"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-775392" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-775392"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-775392" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-775392"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-775392" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-775392"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-775392" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-775392"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-775392" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-775392"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-775392" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-775392"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-775392" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-775392"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-775392" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-775392"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-775392" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-775392"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-775392" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-775392"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-775392" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-775392"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-775392" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-775392"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-775392" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-775392"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-775392" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-775392"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-775392" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-775392"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-775392" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-775392"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-775392" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-775392"

                                                
                                                
----------------------- debugLogs end: false-775392 [took: 3.881772179s] --------------------------------
helpers_test.go:175: Cleaning up "false-775392" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-775392
--- PASS: TestNetworkPlugins/group/false (4.25s)
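The point of this "false" group is that the crio runtime cannot run without a CNI, so --cni=false must be rejected before any cluster work starts. A minimal reproduction of that check (hypothetical profile name; it only asserts on the message shown in the log above):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("minikube", "start", "-p", "false-demo",
			"--cni=false", "--driver=docker", "--container-runtime=crio").CombinedOutput()

		if err != nil && strings.Contains(string(out), `The "crio" container runtime requires CNI`) {
			fmt.Println("got the expected MK_USAGE rejection")
			return
		}
		fmt.Printf("expected a CNI usage error, got err=%v\n%s\n", err, out)
	}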

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.33s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-811845 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-811845 "sudo systemctl is-active --quiet service kubelet": exit status 1 (330.424993ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.33s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (41.12s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-775392 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
E1202 20:50:50.632476  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/functional-536475/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-775392 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (41.123067399s)
--- PASS: TestNetworkPlugins/group/auto/Start (41.12s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.31s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-775392 "pgrep -a kubelet"
I1202 20:51:16.917814  411032 config.go:182] Loaded profile config "auto-775392": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (8.18s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-775392 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-kw7rn" [b355cbd1-f9f6-4061-8467-a006939fc5e7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-kw7rn" [b355cbd1-f9f6-4061-8467-a006939fc5e7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 8.004309458s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (8.18s)
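The NetCatPod step re-applies the netcat deployment and then polls until a pod labelled app=netcat is Running and Ready. An equivalent stand-alone check using kubectl wait (context name copied from the log; the harness itself polls via its own helpers rather than kubectl wait):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		ctx := "auto-775392" // context created by the test run above

		steps := [][]string{
			{"kubectl", "--context", ctx, "replace", "--force", "-f", "testdata/netcat-deployment.yaml"},
			{"kubectl", "--context", ctx, "wait", "--for=condition=Ready", "pod", "-l", "app=netcat", "--timeout=15m"},
		}
		for _, s := range steps {
			if out, err := exec.Command(s[0], s[1:]...).CombinedOutput(); err != nil {
				fmt.Printf("%v failed: %v\n%s\n", s, err, out)
				return
			}
		}
		fmt.Println("netcat pod is ready")
	}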

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.11s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-775392 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.11s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.09s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-775392 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.09s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.09s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-775392 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.09s)
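DNS, Localhost and HairPin above are each a single command executed inside the netcat deployment; the hairpin case checks that a pod can reach itself through its own service name. A sketch that replays the three probes via kubectl exec (context name taken from the log):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		ctx := "auto-775392"

		probes := []struct {
			name string
			cmd  []string
		}{
			{"dns", []string{"nslookup", "kubernetes.default"}},
			{"localhost", []string{"/bin/sh", "-c", "nc -w 5 -i 5 -z localhost 8080"}},
			{"hairpin", []string{"/bin/sh", "-c", "nc -w 5 -i 5 -z netcat 8080"}},
		}
		for _, p := range probes {
			args := append([]string{"--context", ctx, "exec", "deployment/netcat", "--"}, p.cmd...)
			if out, err := exec.Command("kubectl", args...).CombinedOutput(); err != nil {
				fmt.Printf("%s probe failed: %v\n%s\n", p.name, err, out)
				continue
			}
			fmt.Printf("%s probe passed\n", p.name)
		}
	}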

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.28s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-814137
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-814137: (1.282385075s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.28s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (42.87s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-775392 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-775392 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (42.870823408s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (42.87s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (59.15s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-775392 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-775392 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (59.150148145s)
--- PASS: TestNetworkPlugins/group/calico/Start (59.15s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (50.8s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-775392 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
E1202 20:51:56.096366  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/addons-893295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-775392 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (50.802851076s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (50.80s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-jr27q" [48f56898-1055-4092-8d40-3766d6387473] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004911706s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.35s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-775392 "pgrep -a kubelet"
I1202 20:52:31.511514  411032 config.go:182] Loaded profile config "kindnet-775392": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.35s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (8.2s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-775392 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-npslj" [174232fb-d3d3-44bc-98eb-5d85d6d91543] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-npslj" [174232fb-d3d3-44bc-98eb-5d85d6d91543] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 8.005731135s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (8.20s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.32s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-775392 "pgrep -a kubelet"
I1202 20:52:38.519440  411032 config.go:182] Loaded profile config "custom-flannel-775392": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.32s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (8.26s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-775392 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-z2627" [bdcc431a-a888-4107-98ae-fb7abe1518c4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-z2627" [bdcc431a-a888-4107-98ae-fb7abe1518c4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 8.004669089s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (8.26s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.17s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-775392 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.13s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-775392 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.14s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-775392 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-9qtdv" [6194de21-876f-4527-b562-369d6cca47cc] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004316358s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.14s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-775392 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-775392 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.1s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-775392 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.10s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.36s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-775392 "pgrep -a kubelet"
I1202 20:52:52.729168  411032 config.go:182] Loaded profile config "calico-775392": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.36s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (9.35s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-775392 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-tl622" [bbcb8c54-46c2-400c-abb4-a49800588dda] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-tl622" [bbcb8c54-46c2-400c-abb4-a49800588dda] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.003682179s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.35s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.13s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-775392 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.11s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-775392 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (40.6s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-775392 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-775392 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (40.59727267s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (40.60s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-775392 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.13s)
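
The calico DNS, Localhost and HairPin steps above are plain kubectl probes against the netcat deployment, so they can be replayed by hand when triaging a plugin failure. A minimal sketch, assuming the calico-775392 context and the netcat deployment from testdata/netcat-deployment.yaml still exist:

    # cluster DNS lookup from inside the netcat pod
    kubectl --context calico-775392 exec deployment/netcat -- nslookup kubernetes.default
    # loopback reachability of the pod's own port 8080
    kubectl --context calico-775392 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    # hairpin: reach the pod back through its own service name
    kubectl --context calico-775392 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"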

                                                
                                    
TestNetworkPlugins/group/flannel/Start (50.55s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-775392 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-775392 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (50.549226238s)
--- PASS: TestNetworkPlugins/group/flannel/Start (50.55s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (68.64s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-775392 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-775392 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m8.63841782s)
--- PASS: TestNetworkPlugins/group/bridge/Start (68.64s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (52.46s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-992336 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-992336 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (52.463440056s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (52.46s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-775392 "pgrep -a kubelet"
I1202 20:53:43.152350  411032 config.go:182] Loaded profile config "enable-default-cni-775392": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.32s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-775392 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-b97wk" [15e78176-6dcb-42a0-9031-3312bd28bfdd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-b97wk" [15e78176-6dcb-42a0-9031-3312bd28bfdd] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.003472027s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.21s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-775392 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.10s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-775392 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.09s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-775392 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.10s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-9475d" [3241f413-9a01-4b9a-a144-e44f9719da85] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.007164308s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-775392 "pgrep -a kubelet"
I1202 20:53:59.320222  411032 config.go:182] Loaded profile config "flannel-775392": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.34s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (8.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-775392 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-gjs24" [06e7c110-30c5-435f-8066-c55b7ed28958] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-gjs24" [06e7c110-30c5-435f-8066-c55b7ed28958] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 8.004302212s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (8.22s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-775392 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-775392 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.09s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-775392 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.11s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (47.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-336331 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-336331 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (47.375218247s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (47.38s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (10.32s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-992336 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [80960db9-5402-41bc-8354-45cbf0d86346] Pending
helpers_test.go:352: "busybox" [80960db9-5402-41bc-8354-45cbf0d86346] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [80960db9-5402-41bc-8354-45cbf0d86346] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.003707745s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-992336 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.32s)
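
The DeployApp step creates the busybox pod from testdata/busybox.yaml, waits for it to become Ready, and then reads the container's open-file limit. A rough manual equivalent, assuming the same manifest and profile; the kubectl wait call here stands in for the test's own polling helper:

    # create the test workload
    kubectl --context old-k8s-version-992336 create -f testdata/busybox.yaml
    # wait for it to come up (the test polls for up to 8m)
    kubectl --context old-k8s-version-992336 wait --for=condition=Ready pod/busybox --timeout=8m
    # check the file-descriptor limit inside the container
    kubectl --context old-k8s-version-992336 exec busybox -- /bin/sh -c "ulimit -n"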

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-775392 "pgrep -a kubelet"
I1202 20:54:20.421686  411032 config.go:182] Loaded profile config "bridge-775392": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.35s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (9.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-775392 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-zhsws" [102fab75-2554-4a41-9ce4-dd713310bcc9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-zhsws" [102fab75-2554-4a41-9ce4-dd713310bcc9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.005949817s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.24s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-775392 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-775392 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-775392 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (44.33s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-997805 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-997805 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2: (44.33021445s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (44.33s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (16.54s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-992336 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-992336 --alsologtostderr -v=3: (16.536955566s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (16.54s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-992336 -n old-k8s-version-992336
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-992336 -n old-k8s-version-992336: exit status 7 (94.062421ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-992336 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.25s)
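
The exit status 7 above corresponds to the stopped host ("Stopped" on stdout), which the test explicitly treats as acceptable; the point of the step is that addons can still be enabled while the profile is down. By hand, assuming the stopped old-k8s-version-992336 profile from the preceding Stop step:

    # a stopped profile makes `status` exit non-zero (7 here), which the test tolerates
    out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-992336 -n old-k8s-version-992336
    # addons can still be toggled while the profile is stopped
    out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-992336 --images=MetricsScraper=registry.k8s.io/echoserver:1.4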

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (51.91s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-992336 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-992336 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (51.473438459s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-992336 -n old-k8s-version-992336
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (51.91s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (31.88s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-245604 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-245604 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (31.879893243s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (31.88s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (9.26s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-336331 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [17098746-a5de-4eb1-afef-faf394ddb509] Pending
helpers_test.go:352: "busybox" [17098746-a5de-4eb1-afef-faf394ddb509] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1202 20:55:02.841080  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/functional-136749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox" [17098746-a5de-4eb1-afef-faf394ddb509] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.004081945s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-336331 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.26s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (18.22s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-336331 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-336331 --alsologtostderr -v=3: (18.21718132s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (18.22s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.32s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-997805 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [b5b6709a-d731-4be3-a6d0-ecbcb3655de4] Pending
helpers_test.go:352: "busybox" [b5b6709a-d731-4be3-a6d0-ecbcb3655de4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [b5b6709a-d731-4be3-a6d0-ecbcb3655de4] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.004648134s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-997805 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.32s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (3.11s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-245604 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-245604 --alsologtostderr -v=3: (3.112821955s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.11s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (16.36s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-997805 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-997805 --alsologtostderr -v=3: (16.36158322s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (16.36s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-245604 -n newest-cni-245604
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-245604 -n newest-cni-245604: exit status 7 (88.962747ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-245604 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (10.9s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-245604 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-245604 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (10.442962904s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-245604 -n newest-cni-245604
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (10.90s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-336331 -n no-preload-336331
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-336331 -n no-preload-336331: exit status 7 (85.437523ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-336331 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (45.54s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-336331 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-336331 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (45.187333629s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-336331 -n no-preload-336331
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (45.54s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.36s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-245604 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.36s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-kjcfm" [5a07b7f3-9140-49eb-966b-f8a44aa0fa16] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003912579s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-997805 -n default-k8s-diff-port-997805
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-997805 -n default-k8s-diff-port-997805: exit status 7 (113.102536ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-997805 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.27s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (50.32s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-997805 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-997805 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2: (49.927972242s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-997805 -n default-k8s-diff-port-997805
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (50.32s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-kjcfm" [5a07b7f3-9140-49eb-966b-f8a44aa0fa16] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005701925s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-992336 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (43.76s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-386191 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-386191 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2: (43.756859123s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (43.76s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.3s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-992336 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.30s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-b84665fb8-njbfb" [1d7cce7a-c39a-4ac1-baf3-3bc2981a5702] Running
E1202 20:56:17.088228  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/auto-775392/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:56:17.094650  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/auto-775392/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:56:17.106158  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/auto-775392/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:56:17.127628  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/auto-775392/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:56:17.169122  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/auto-775392/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:56:17.250634  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/auto-775392/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:56:17.412241  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/auto-775392/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:56:17.734484  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/auto-775392/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:56:18.376916  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/auto-775392/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:56:19.658363  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/auto-775392/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:56:22.220627  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/auto-775392/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003246846s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-b84665fb8-njbfb" [1d7cce7a-c39a-4ac1-baf3-3bc2981a5702] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00375077s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-336331 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
E1202 20:56:27.342723  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/auto-775392/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-336331 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.26s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-386191 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [ed12a6fb-53bf-431f-b98e-7d12c1f8a178] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [ed12a6fb-53bf-431f-b98e-7d12c1f8a178] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.004098495s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-386191 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.26s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-jz8xk" [cbfcab3a-34f4-49e3-b330-2077b65e6a48] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003307614s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-jz8xk" [cbfcab3a-34f4-49e3-b330-2077b65e6a48] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003616355s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-997805 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-997805 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (18.16s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-386191 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-386191 --alsologtostderr -v=3: (18.163469564s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (18.16s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-386191 -n embed-certs-386191
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-386191 -n embed-certs-386191: exit status 7 (84.35718ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-386191 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (47.56s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-386191 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2
E1202 20:57:25.158793  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/kindnet-775392/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:57:25.165313  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/kindnet-775392/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:57:25.176849  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/kindnet-775392/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:57:25.198515  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/kindnet-775392/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:57:25.240141  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/kindnet-775392/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:57:25.321744  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/kindnet-775392/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:57:25.483454  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/kindnet-775392/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:57:25.805321  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/kindnet-775392/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:57:26.446987  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/kindnet-775392/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:57:27.729155  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/kindnet-775392/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:57:30.290887  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/kindnet-775392/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:57:35.412890  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/kindnet-775392/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:57:38.761316  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/custom-flannel-775392/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:57:38.767769  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/custom-flannel-775392/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:57:38.779368  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/custom-flannel-775392/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:57:38.800903  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/custom-flannel-775392/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:57:38.842475  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/custom-flannel-775392/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:57:38.924044  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/custom-flannel-775392/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:57:39.028650  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/auto-775392/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:57:39.086233  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/custom-flannel-775392/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:57:39.408002  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/custom-flannel-775392/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:57:40.049480  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/custom-flannel-775392/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:57:41.331272  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/custom-flannel-775392/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:57:43.892991  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/custom-flannel-775392/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:57:45.654503  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/kindnet-775392/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:57:46.362309  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/calico-775392/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:57:46.368795  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/calico-775392/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:57:46.380364  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/calico-775392/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:57:46.401912  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/calico-775392/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:57:46.443462  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/calico-775392/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:57:46.525105  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/calico-775392/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:57:46.686708  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/calico-775392/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:57:47.008495  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/calico-775392/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:57:47.562710  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/functional-536475/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:57:47.650250  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/calico-775392/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:57:48.932017  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/calico-775392/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:57:49.014588  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/custom-flannel-775392/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
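The repeated cert_rotation errors above all point at client certificates for profiles (custom-flannel-775392, calico-775392, auto-775392, ...) whose .minikube/profiles directories were already deleted by earlier tests, so the kubeconfig entries are dangling. A minimal sketch, assuming the path layout shown in the log, of the kind of pre-flight check that would surface a stale profile once instead of on every cert-rotation attempt; the helper and its name are illustrative, not minikube's own code:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// checkProfileClientCert reports whether the client certificate that a
// kubeconfig user entry references still exists on disk. Deleted profiles
// leave dangling cert paths behind, which is what produces the repeated
// "Loading client cert failed" messages above.
func checkProfileClientCert(minikubeHome, profile string) error {
	crt := filepath.Join(minikubeHome, "profiles", profile, "client.crt")
	if _, err := os.Stat(crt); err != nil {
		return fmt.Errorf("profile %q: %w", profile, err)
	}
	return nil
}

func main() {
	// Paths taken from the log output above; adjust for a local run.
	home := "/home/jenkins/minikube-integration/21997-407427/.minikube"
	for _, p := range []string{"custom-flannel-775392", "calico-775392"} {
		if err := checkProfileClientCert(home, p); err != nil {
			fmt.Println("stale kubeconfig entry:", err)
		}
	}
}
```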
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-386191 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2: (47.212421625s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-386191 -n embed-certs-386191
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (47.56s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-zkxsp" [ed5d6b00-eaf8-41d7-90ee-e4c7a6a3f869] Running
E1202 20:57:51.493792  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/calico-775392/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:57:56.615290  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/calico-775392/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003958962s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)
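The UserAppExistsAfterStop check above waits up to 9m0s for a pod matching k8s-app=kubernetes-dashboard to report Running, and succeeds in about 6 seconds. A minimal client-go sketch of that style of label-selector wait; the namespace, label, and timeout mirror the test output, while the kubeconfig path and the helper itself are assumptions for a local reproduction rather than the test's own implementation:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForRunningPod polls until at least one pod matching the label selector
// in the given namespace is in phase Running, or the timeout expires.
func waitForRunningPod(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(context.Background(), 2*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return false, err
			}
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return true, nil
				}
			}
			return false, nil
		})
}

func main() {
	// Uses the current-context of the default kubeconfig; pointing it at the
	// embed-certs-386191 cluster is left to the caller.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitForRunningPod(cs, "kubernetes-dashboard", "k8s-app=kubernetes-dashboard", 9*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("dashboard pod is Running")
}
```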

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-zkxsp" [ed5d6b00-eaf8-41d7-90ee-e4c7a6a3f869] Running
E1202 20:57:59.256693  411032 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/custom-flannel-775392/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004484319s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-386191 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-386191 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

                                                
                                    

Test skip (33/415)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.2/cached-images 0
15 TestDownloadOnly/v1.34.2/binaries 0
16 TestDownloadOnly/v1.34.2/kubectl 0
22 TestDownloadOnly/v1.35.0-beta.0/preload-exists 0.14
25 TestDownloadOnly/v1.35.0-beta.0/kubectl 0
42 TestAddons/serial/GCPAuth/RealCredentials 0
49 TestAddons/parallel/Olm 0
60 TestDockerFlags 0
63 TestDockerEnvContainerd 0
64 TestHyperKitDriverInstallOrUpdate 0
65 TestHyperkitDriverSkipUpgrade 0
116 TestFunctional/parallel/DockerEnv 0
117 TestFunctional/parallel/PodmanEnv 0
147 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0
148 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
149 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0
211 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv 0
212 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv 0
244 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig 0
245 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
246 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS 0
262 TestGvisorAddon 0
284 TestImageBuild 0
285 TestISOImage 0
349 TestChangeNoneUser 0
352 TestScheduledStopWindows 0
354 TestSkaffold 0
378 TestNetworkPlugins/group/kubenet 4.09
388 TestNetworkPlugins/group/cilium 5.22
394 TestStartStop/group/disable-driver-mounts 0.24
x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.2/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.2/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.2/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/preload-exists (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/preload-exists
I1202 19:54:39.262212  411032 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
W1202 19:54:39.388975  411032 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 status code: 404
W1202 19:54:39.406761  411032 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 status code: 404
aaa_download_only_test.go:113: No preload image
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/preload-exists (0.14s)
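The preload-exists check above probes two well-known tarball URLs and skips when both return 404 (no preload has been published for v1.35.0-beta.0 yet). A minimal sketch of the same kind of availability probe, reusing the URLs from the log lines; the helper is illustrative and not minikube's preload.go:

```go
package main

import (
	"fmt"
	"net/http"
)

// preloadAvailable reports whether a preloaded-images tarball is published
// at the given URL, mirroring the 404 checks in the log above.
func preloadAvailable(url string) (bool, error) {
	resp, err := http.Head(url)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	return resp.StatusCode == http.StatusOK, nil
}

func main() {
	urls := []string{
		"https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4",
		"https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4",
	}
	for _, u := range urls {
		ok, err := preloadAvailable(u)
		fmt.Printf("%s -> available=%v err=%v\n", u, ok, err)
	}
}
```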

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/kubectl (0.00s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:763: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestISOImage (0s)

                                                
                                                
=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (4.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-775392 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-775392

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-775392

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-775392

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-775392

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-775392

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-775392

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-775392

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-775392

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-775392

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-775392

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-775392" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-775392"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-775392" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-775392"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-775392" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-775392"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-775392

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-775392" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-775392"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-775392" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-775392"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-775392" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-775392" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-775392" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-775392" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-775392" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-775392" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-775392" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-775392" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-775392" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-775392"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-775392" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-775392"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-775392" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-775392"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-775392" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-775392"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-775392" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-775392"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-775392" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-775392" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-775392" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-775392" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-775392"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-775392" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-775392"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-775392" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-775392"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-775392" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-775392"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-775392" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-775392"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21997-407427/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 02 Dec 2025 20:47:28 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: pause-796891
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21997-407427/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 02 Dec 2025 20:47:28 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: stopped-upgrade-814137
contexts:
- context:
    cluster: pause-796891
    extensions:
    - extension:
        last-update: Tue, 02 Dec 2025 20:47:28 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-796891
  name: pause-796891
- context:
    cluster: stopped-upgrade-814137
    user: stopped-upgrade-814137
  name: stopped-upgrade-814137
current-context: pause-796891
kind: Config
users:
- name: pause-796891
  user:
    client-certificate: /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/pause-796891/client.crt
    client-key: /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/pause-796891/client.key
- name: stopped-upgrade-814137
  user:
    client-certificate: /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/stopped-upgrade-814137/client.crt
    client-key: /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/stopped-upgrade-814137/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-775392

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-775392" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-775392"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-775392" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-775392"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-775392" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-775392"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-775392" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-775392"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-775392" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-775392"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-775392" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-775392"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-775392" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-775392"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-775392" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-775392"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-775392" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-775392"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-775392" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-775392"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-775392" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-775392"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-775392" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-775392"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-775392" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-775392"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-775392" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-775392"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-775392" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-775392"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-775392" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-775392"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-775392" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-775392"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-775392" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-775392"

                                                
                                                
----------------------- debugLogs end: kubenet-775392 [took: 3.918239934s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-775392" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-775392
--- SKIP: TestNetworkPlugins/group/kubenet (4.09s)
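The debugLogs dump above shows why every kubectl probe failed: the host kubeconfig only contains the pause-796891 and stopped-upgrade-814137 contexts, and kubenet-775392 was never created before the skip. A minimal client-go sketch, assuming the default kubeconfig location, of checking for an expected context before running such probes; the check itself is illustrative, not part of the test suite:

```go
package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// RecommendedHomeFile resolves to the default ~/.kube/config; the
	// report's host path is an assumption for a local reproduction.
	cfg, err := clientcmd.LoadFromFile(clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	fmt.Println("current-context:", cfg.CurrentContext)

	// The kubenet debug probes fail because this context never existed.
	want := "kubenet-775392"
	if _, ok := cfg.Contexts[want]; !ok {
		fmt.Printf("context %q not found; available contexts:\n", want)
		for name := range cfg.Contexts {
			fmt.Println(" -", name)
		}
	}
}
```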

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (5.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-775392 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-775392

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-775392

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-775392

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-775392

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-775392

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-775392

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-775392

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-775392

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-775392

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-775392

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-775392" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-775392"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-775392" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-775392"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-775392" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-775392"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-775392

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-775392" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-775392"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-775392" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-775392"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-775392" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-775392" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-775392" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-775392" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-775392" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-775392" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-775392" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-775392" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-775392" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-775392"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-775392" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-775392"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-775392" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-775392"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-775392" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-775392"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-775392" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-775392"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-775392

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-775392

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-775392" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-775392" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-775392

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-775392

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-775392" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-775392" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-775392" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-775392" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-775392" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-775392" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-775392"

>>> host: kubelet daemon config:
* Profile "cilium-775392" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-775392"

>>> k8s: kubelet logs:
* Profile "cilium-775392" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-775392"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-775392" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-775392"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-775392" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-775392"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21997-407427/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 02 Dec 2025 20:47:28 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: pause-796891
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21997-407427/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 02 Dec 2025 20:47:28 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: stopped-upgrade-814137
contexts:
- context:
    cluster: pause-796891
    extensions:
    - extension:
        last-update: Tue, 02 Dec 2025 20:47:28 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-796891
  name: pause-796891
- context:
    cluster: stopped-upgrade-814137
    user: stopped-upgrade-814137
  name: stopped-upgrade-814137
current-context: pause-796891
kind: Config
users:
- name: pause-796891
  user:
    client-certificate: /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/pause-796891/client.crt
    client-key: /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/pause-796891/client.key
- name: stopped-upgrade-814137
  user:
    client-certificate: /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/stopped-upgrade-814137/client.crt
    client-key: /home/jenkins/minikube-integration/21997-407427/.minikube/profiles/stopped-upgrade-814137/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-775392

>>> host: docker daemon status:
* Profile "cilium-775392" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-775392"

>>> host: docker daemon config:
* Profile "cilium-775392" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-775392"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-775392" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-775392"

>>> host: docker system info:
* Profile "cilium-775392" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-775392"

>>> host: cri-docker daemon status:
* Profile "cilium-775392" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-775392"

>>> host: cri-docker daemon config:
* Profile "cilium-775392" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-775392"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-775392" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-775392"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-775392" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-775392"

>>> host: cri-dockerd version:
* Profile "cilium-775392" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-775392"

>>> host: containerd daemon status:
* Profile "cilium-775392" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-775392"

>>> host: containerd daemon config:
* Profile "cilium-775392" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-775392"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-775392" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-775392"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-775392" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-775392"

>>> host: containerd config dump:
* Profile "cilium-775392" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-775392"

>>> host: crio daemon status:
* Profile "cilium-775392" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-775392"

>>> host: crio daemon config:
* Profile "cilium-775392" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-775392"

>>> host: /etc/crio:
* Profile "cilium-775392" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-775392"

>>> host: crio config:
* Profile "cilium-775392" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-775392"

----------------------- debugLogs end: cilium-775392 [took: 5.033118182s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-775392" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-775392
--- SKIP: TestNetworkPlugins/group/cilium (5.22s)

TestStartStop/group/disable-driver-mounts (0.24s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-234978" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-234978
--- SKIP: TestStartStop/group/disable-driver-mounts (0.24s)
